Feb 12 21:51:33.660859 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 21:51:33.660874 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 21:51:33.660880 kernel: Disabled fast string operations
Feb 12 21:51:33.660884 kernel: BIOS-provided physical RAM map:
Feb 12 21:51:33.660888 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Feb 12 21:51:33.660892 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Feb 12 21:51:33.660898 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Feb 12 21:51:33.660902 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Feb 12 21:51:33.660906 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Feb 12 21:51:33.660910 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Feb 12 21:51:33.660914 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Feb 12 21:51:33.660918 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Feb 12 21:51:33.660922 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Feb 12 21:51:33.660926 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 12 21:51:33.660932 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Feb 12 21:51:33.660937 kernel: NX (Execute Disable) protection: active
Feb 12 21:51:33.660941 kernel: SMBIOS 2.7 present.
Feb 12 21:51:33.660946 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Feb 12 21:51:33.660950 kernel: vmware: hypercall mode: 0x00
Feb 12 21:51:33.660955 kernel: Hypervisor detected: VMware
Feb 12 21:51:33.660960 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Feb 12 21:51:33.660964 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Feb 12 21:51:33.660969 kernel: vmware: using clock offset of 4491478142 ns
Feb 12 21:51:33.660973 kernel: tsc: Detected 3408.000 MHz processor
Feb 12 21:51:33.660978 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 21:51:33.660983 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 21:51:33.660988 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Feb 12 21:51:33.660992 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 21:51:33.660996 kernel: total RAM covered: 3072M
Feb 12 21:51:33.661002 kernel: Found optimal setting for mtrr clean up
Feb 12 21:51:33.661007 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Feb 12 21:51:33.661011 kernel: Using GB pages for direct mapping
Feb 12 21:51:33.661016 kernel: ACPI: Early table checksum verification disabled
Feb 12 21:51:33.661020 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Feb 12 21:51:33.661025 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Feb 12 21:51:33.661029 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Feb 12 21:51:33.661034 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Feb 12 21:51:33.661039 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Feb 12 21:51:33.661043 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Feb 12 21:51:33.661049 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Feb 12 21:51:33.661055 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Feb 12 21:51:33.661060 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Feb 12 21:51:33.661065 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Feb 12 21:51:33.661070 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Feb 12 21:51:33.661076 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Feb 12 21:51:33.661097 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Feb 12 21:51:33.661107 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Feb 12 21:51:33.661112 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Feb 12 21:51:33.661124 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Feb 12 21:51:33.661130 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Feb 12 21:51:33.661135 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Feb 12 21:51:33.661139 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Feb 12 21:51:33.661144 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Feb 12 21:51:33.661151 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Feb 12 21:51:33.661156 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Feb 12 21:51:33.661161 kernel: system APIC only can use physical flat
Feb 12 21:51:33.661165 kernel: Setting APIC routing to physical flat.
Feb 12 21:51:33.661170 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 12 21:51:33.661175 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 12 21:51:33.661180 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 12 21:51:33.661188 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 12 21:51:33.661193 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 12 21:51:33.661201 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 12 21:51:33.661209 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 12 21:51:33.661214 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 12 21:51:33.661223 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Feb 12 21:51:33.661230 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Feb 12 21:51:33.661240 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Feb 12 21:51:33.661250 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Feb 12 21:51:33.661258 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Feb 12 21:51:33.661263 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Feb 12 21:51:33.661268 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Feb 12 21:51:33.661276 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Feb 12 21:51:33.661283 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Feb 12 21:51:33.661288 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Feb 12 21:51:33.661293 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Feb 12 21:51:33.661298 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Feb 12 21:51:33.661303 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Feb 12 21:51:33.661310 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Feb 12 21:51:33.661315 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Feb 12 21:51:33.661319 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Feb 12 21:51:33.661325 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Feb 12 21:51:33.661335 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Feb 12 21:51:33.661343 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Feb 12 21:51:33.661351 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Feb 12 21:51:33.661360 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Feb 12 21:51:33.661368 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Feb 12 21:51:33.661373 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Feb 12 21:51:33.661378 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Feb 12 21:51:33.661383 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Feb 12 21:51:33.661388 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Feb 12 21:51:33.661394 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Feb 12 21:51:33.661399 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Feb 12 21:51:33.661404 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Feb 12 21:51:33.661409 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Feb 12 21:51:33.661414 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Feb 12 21:51:33.661419 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Feb 12 21:51:33.661424 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Feb 12 21:51:33.661431 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Feb 12 21:51:33.661436 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Feb 12 21:51:33.661442 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Feb 12 21:51:33.661450 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Feb 12 21:51:33.661455 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Feb 12 21:51:33.661462 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Feb 12 21:51:33.661469 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Feb 12 21:51:33.661474 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Feb 12 21:51:33.661480 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Feb 12 21:51:33.661486 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Feb 12 21:51:33.661491 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Feb 12 21:51:33.661496 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Feb 12 21:51:33.661501 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Feb 12 21:51:33.661507 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Feb 12 21:51:33.661512 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Feb 12 21:51:33.661517 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Feb 12 21:51:33.661522 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Feb 12 21:51:33.661526 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Feb 12 21:51:33.661531 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Feb 12 21:51:33.661536 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Feb 12 21:51:33.661545 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Feb 12 21:51:33.661551 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Feb 12 21:51:33.661556 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Feb 12 21:51:33.661561 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Feb 12 21:51:33.661566 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Feb 12 21:51:33.661572 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Feb 12 21:51:33.661578 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Feb 12 21:51:33.661583 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Feb 12 21:51:33.661588 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Feb 12 21:51:33.661593 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Feb 12 21:51:33.661598 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Feb 12 21:51:33.661604 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Feb 12 21:51:33.661609 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Feb 12 21:51:33.661615 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Feb 12 21:51:33.661620 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Feb 12 21:51:33.661625 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Feb 12 21:51:33.661630 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Feb 12 21:51:33.661635 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Feb 12 21:51:33.661640 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Feb 12 21:51:33.661646 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Feb 12 21:51:33.661652 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Feb 12 21:51:33.661657 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Feb 12 21:51:33.661662 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Feb 12 21:51:33.661667 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Feb 12 21:51:33.661673 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Feb 12 21:51:33.661678 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Feb 12 21:51:33.661683 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Feb 12 21:51:33.661689 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Feb 12 21:51:33.661694 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Feb 12 21:51:33.661699 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Feb 12 21:51:33.661705 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Feb 12 21:51:33.661710 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Feb 12 21:51:33.661744 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Feb 12 21:51:33.661750 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Feb 12 21:51:33.661755 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Feb 12 21:51:33.661760 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Feb 12 21:51:33.661765 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Feb 12 21:51:33.661770 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Feb 12 21:51:33.661776 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Feb 12 21:51:33.661781 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Feb 12 21:51:33.661788 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Feb 12 21:51:33.661793 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Feb 12 21:51:33.661798 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Feb 12 21:51:33.661803 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Feb 12 21:51:33.661809 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Feb 12 21:51:33.661814 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Feb 12 21:51:33.661819 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Feb 12 21:51:33.661824 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Feb 12 21:51:33.661829 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Feb 12 21:51:33.661837 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Feb 12 21:51:33.661845 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Feb 12 21:51:33.661851 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Feb 12 21:51:33.661856 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Feb 12 21:51:33.661861 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Feb 12 21:51:33.661866 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Feb 12 21:51:33.661872 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Feb 12 21:51:33.661877 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Feb 12 21:51:33.661882 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Feb 12 21:51:33.661887 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Feb 12 21:51:33.661894 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Feb 12 21:51:33.661899 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Feb 12 21:51:33.661904 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Feb 12 21:51:33.661909 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Feb 12 21:51:33.661915 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Feb 12 21:51:33.661920 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Feb 12 21:51:33.661925 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Feb 12 21:51:33.661930 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Feb 12 21:51:33.661935 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 12 21:51:33.661941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 12 21:51:33.661947 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Feb 12 21:51:33.661953 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Feb 12 21:51:33.661958 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Feb 12 21:51:33.661963 kernel: Zone ranges:
Feb 12 21:51:33.661969 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 21:51:33.661974 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Feb 12 21:51:33.661979 kernel: Normal empty
Feb 12 21:51:33.661984 kernel: Movable zone start for each node
Feb 12 21:51:33.661990 kernel: Early memory node ranges
Feb 12 21:51:33.661996 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Feb 12 21:51:33.662001 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Feb 12 21:51:33.662007 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Feb 12 21:51:33.662012 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Feb 12 21:51:33.662017 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 21:51:33.662023 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Feb 12 21:51:33.662028 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Feb 12 21:51:33.662033 kernel: ACPI: PM-Timer IO Port: 0x1008
Feb 12 21:51:33.662039 kernel: system APIC only can use physical flat
Feb 12 21:51:33.662044 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Feb 12 21:51:33.662050 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 12 21:51:33.662055 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 12 21:51:33.662060 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 12 21:51:33.662066 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 12 21:51:33.662071 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 12 21:51:33.662076 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 12 21:51:33.662081 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 12 21:51:33.662087 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 12 21:51:33.662092 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 12 21:51:33.662098 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 12 21:51:33.662103 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 12 21:51:33.662108 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 12 21:51:33.662114 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 12 21:51:33.662119 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 12 21:51:33.662124 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 12 21:51:33.662132 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 12 21:51:33.662140 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Feb 12 21:51:33.662147 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Feb 12 21:51:33.662154 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Feb 12 21:51:33.662161 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Feb 12 21:51:33.662166 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Feb 12 21:51:33.662172 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Feb 12 21:51:33.662179 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Feb 12 21:51:33.662188 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Feb 12 21:51:33.662197 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Feb 12 21:51:33.662205 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Feb 12 21:51:33.662214 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Feb 12 21:51:33.662220 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Feb 12 21:51:33.662227 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Feb 12 21:51:33.662232 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Feb 12 21:51:33.662240 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Feb 12 21:51:33.662246 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Feb 12 21:51:33.662251 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Feb 12 21:51:33.662257 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Feb 12 21:51:33.662264 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Feb 12 21:51:33.662269 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Feb 12 21:51:33.662274 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Feb 12 21:51:33.662282 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Feb 12 21:51:33.662291 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Feb 12 21:51:33.662300 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Feb 12 21:51:33.662307 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Feb 12 21:51:33.662314 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Feb 12 21:51:33.662319 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Feb 12 21:51:33.662324 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Feb 12 21:51:33.662331 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Feb 12 21:51:33.662340 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Feb 12 21:51:33.662346 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Feb 12 21:51:33.662354 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Feb 12 21:51:33.662360 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Feb 12 21:51:33.662365 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Feb 12 21:51:33.662370 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Feb 12 21:51:33.662375 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Feb 12 21:51:33.662381 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Feb 12 21:51:33.662388 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Feb 12 21:51:33.662396 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Feb 12 21:51:33.662405 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Feb 12 21:51:33.662414 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Feb 12 21:51:33.662419 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Feb 12 21:51:33.662425 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Feb 12 21:51:33.662430 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Feb 12 21:51:33.662438 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Feb 12 21:51:33.662443 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Feb 12 21:51:33.662448 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Feb 12 21:51:33.662454 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Feb 12 21:51:33.662463 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Feb 12 21:51:33.662470 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Feb 12 21:51:33.662476 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Feb 12 21:51:33.662484 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Feb 12 21:51:33.662491 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Feb 12 21:51:33.662497 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Feb 12 21:51:33.662503 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Feb 12 21:51:33.662512 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Feb 12 21:51:33.662518 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Feb 12 21:51:33.662524 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Feb 12 21:51:33.662529 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Feb 12 21:51:33.662536 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Feb 12 21:51:33.662541 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Feb 12 21:51:33.662547 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Feb 12 21:51:33.662554 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Feb 12 21:51:33.662560 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Feb 12 21:51:33.662565 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Feb 12 21:51:33.662570 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Feb 12 21:51:33.662575 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Feb 12 21:51:33.662582 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Feb 12 21:51:33.662592 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Feb 12 21:51:33.662597 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Feb 12 21:51:33.662603 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Feb 12 21:51:33.662610 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Feb 12 21:51:33.662615 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Feb 12 21:51:33.662621 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Feb 12 21:51:33.662626 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Feb 12 21:51:33.662631 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Feb 12 21:51:33.662636 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Feb 12 21:51:33.662643 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Feb 12 21:51:33.662648 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Feb 12 21:51:33.662653 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Feb 12 21:51:33.662658 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Feb 12 21:51:33.662663 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Feb 12 21:51:33.662669 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Feb 12 21:51:33.662674 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Feb 12 21:51:33.662679 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Feb 12 21:51:33.662685 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Feb 12 21:51:33.662690 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Feb 12 21:51:33.662696 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Feb 12 21:51:33.662702 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Feb 12 21:51:33.662707 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Feb 12 21:51:33.662737 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Feb 12 21:51:33.662743 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Feb 12 21:51:33.662748 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Feb 12 21:51:33.662754 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Feb 12 21:51:33.662759 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Feb 12 21:51:33.662764 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Feb 12 21:51:33.662770 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Feb 12 21:51:33.662776 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Feb 12 21:51:33.662781 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Feb 12 21:51:33.662786 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Feb 12 21:51:33.662791 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Feb 12 21:51:33.662797 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Feb 12 21:51:33.662802 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Feb 12 21:51:33.662807 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Feb 12 21:51:33.662812 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Feb 12 21:51:33.662817 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Feb 12 21:51:33.662824 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Feb 12 21:51:33.662829 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Feb 12 21:51:33.662835 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Feb 12 21:51:33.662840 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Feb 12 21:51:33.662845 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Feb 12 21:51:33.662850 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Feb 12 21:51:33.662856 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Feb 12 21:51:33.662861 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 21:51:33.662866 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Feb 12 21:51:33.662873 kernel: TSC deadline timer available
Feb 12 21:51:33.662878 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Feb 12 21:51:33.662884 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Feb 12 21:51:33.662889 kernel: Booting paravirtualized kernel on VMware hypervisor
Feb 12 21:51:33.662895 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 21:51:33.662900 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
Feb 12 21:51:33.662906 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 12 21:51:33.662911 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 12 21:51:33.662916 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Feb 12 21:51:33.662922 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Feb 12 21:51:33.662927 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Feb 12 21:51:33.662932 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Feb 12 21:51:33.662938 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Feb 12 21:51:33.662943 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Feb 12 21:51:33.662949 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Feb 12 21:51:33.662965 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Feb 12 21:51:33.662972 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Feb 12 21:51:33.662977 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Feb 12 21:51:33.662984 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Feb 12 21:51:33.662989 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Feb 12 21:51:33.662994 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Feb 12 21:51:33.663000 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Feb 12 21:51:33.663005 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Feb 12 21:51:33.663011 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Feb 12 21:51:33.663016 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Feb 12 21:51:33.663022 kernel: Policy zone: DMA32
Feb 12 21:51:33.663029 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 21:51:33.663036 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 21:51:33.663041 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Feb 12 21:51:33.663047 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Feb 12 21:51:33.663052 kernel: printk: log_buf_len min size: 262144 bytes
Feb 12 21:51:33.663058 kernel: printk: log_buf_len: 1048576 bytes
Feb 12 21:51:33.663063 kernel: printk: early log buf free: 239728(91%)
Feb 12 21:51:33.663069 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 21:51:33.663076 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 21:51:33.663081 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 21:51:33.663087 kernel: Memory: 1942952K/2096628K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 153416K reserved, 0K cma-reserved)
Feb 12 21:51:33.663093 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Feb 12 21:51:33.663098 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 21:51:33.663104 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 21:51:33.663111 kernel: rcu: Hierarchical RCU implementation.
Feb 12 21:51:33.663117 kernel: rcu: RCU event tracing is enabled.
Feb 12 21:51:33.663123 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Feb 12 21:51:33.663129 kernel: Rude variant of Tasks RCU enabled.
Feb 12 21:51:33.663134 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 21:51:33.663140 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 21:51:33.663146 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Feb 12 21:51:33.663151 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Feb 12 21:51:33.663157 kernel: random: crng init done
Feb 12 21:51:33.663163 kernel: Console: colour VGA+ 80x25
Feb 12 21:51:33.663169 kernel: printk: console [tty0] enabled
Feb 12 21:51:33.663175 kernel: printk: console [ttyS0] enabled
Feb 12 21:51:33.663180 kernel: ACPI: Core revision 20210730
Feb 12 21:51:33.663186 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Feb 12 21:51:33.663193 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 21:51:33.663200 kernel: x2apic enabled
Feb 12 21:51:33.663206 kernel: Switched APIC routing to physical x2apic.
Feb 12 21:51:33.663212 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 12 21:51:33.663219 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Feb 12 21:51:33.663225 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Feb 12 21:51:33.663230 kernel: Disabled fast string operations
Feb 12 21:51:33.663236 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 12 21:51:33.663242 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 12 21:51:33.663248 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 21:51:33.663254 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 12 21:51:33.663260 kernel: Spectre V2 : Mitigation: Enhanced IBRS Feb 12 21:51:33.663265 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 12 21:51:33.663272 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 12 21:51:33.663277 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 12 21:51:33.663283 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 12 21:51:33.663289 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 12 21:51:33.663295 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 12 21:51:33.663302 kernel: SRBDS: Unknown: Dependent on hypervisor status Feb 12 21:51:33.663308 kernel: GDS: Unknown: Dependent on hypervisor status Feb 12 21:51:33.663313 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 12 21:51:33.663320 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 12 21:51:33.663326 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 12 21:51:33.663331 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 12 21:51:33.663337 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 12 21:51:33.663343 kernel: Freeing SMP alternatives memory: 32K Feb 12 21:51:33.663348 kernel: pid_max: default: 131072 minimum: 1024 Feb 12 21:51:33.663354 kernel: LSM: Security Framework initializing Feb 12 21:51:33.663360 kernel: SELinux: Initializing. 
Feb 12 21:51:33.663366 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 12 21:51:33.663372 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 12 21:51:33.663378 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 12 21:51:33.663384 kernel: Performance Events: Skylake events, core PMU driver. Feb 12 21:51:33.663389 kernel: core: CPUID marked event: 'cpu cycles' unavailable Feb 12 21:51:33.663395 kernel: core: CPUID marked event: 'instructions' unavailable Feb 12 21:51:33.663401 kernel: core: CPUID marked event: 'bus cycles' unavailable Feb 12 21:51:33.663406 kernel: core: CPUID marked event: 'cache references' unavailable Feb 12 21:51:33.663412 kernel: core: CPUID marked event: 'cache misses' unavailable Feb 12 21:51:33.663417 kernel: core: CPUID marked event: 'branch instructions' unavailable Feb 12 21:51:33.663424 kernel: core: CPUID marked event: 'branch misses' unavailable Feb 12 21:51:33.663429 kernel: ... version: 1 Feb 12 21:51:33.663435 kernel: ... bit width: 48 Feb 12 21:51:33.663440 kernel: ... generic registers: 4 Feb 12 21:51:33.663446 kernel: ... value mask: 0000ffffffffffff Feb 12 21:51:33.663452 kernel: ... max period: 000000007fffffff Feb 12 21:51:33.663457 kernel: ... fixed-purpose events: 0 Feb 12 21:51:33.663463 kernel: ... event mask: 000000000000000f Feb 12 21:51:33.663468 kernel: signal: max sigframe size: 1776 Feb 12 21:51:33.663475 kernel: rcu: Hierarchical SRCU implementation. Feb 12 21:51:33.663481 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 12 21:51:33.663486 kernel: smp: Bringing up secondary CPUs ... Feb 12 21:51:33.663492 kernel: x86: Booting SMP configuration: Feb 12 21:51:33.663497 kernel: .... 
node #0, CPUs: #1 Feb 12 21:51:33.663503 kernel: Disabled fast string operations Feb 12 21:51:33.663509 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Feb 12 21:51:33.663514 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Feb 12 21:51:33.663520 kernel: smp: Brought up 1 node, 2 CPUs Feb 12 21:51:33.663525 kernel: smpboot: Max logical packages: 128 Feb 12 21:51:33.663532 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Feb 12 21:51:33.663538 kernel: devtmpfs: initialized Feb 12 21:51:33.663543 kernel: x86/mm: Memory block size: 128MB Feb 12 21:51:33.663549 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Feb 12 21:51:33.663555 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 12 21:51:33.663560 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Feb 12 21:51:33.663566 kernel: pinctrl core: initialized pinctrl subsystem Feb 12 21:51:33.663573 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 12 21:51:33.663582 kernel: audit: initializing netlink subsys (disabled) Feb 12 21:51:33.663589 kernel: audit: type=2000 audit(1707774692.058:1): state=initialized audit_enabled=0 res=1 Feb 12 21:51:33.663595 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 12 21:51:33.663600 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 12 21:51:33.663606 kernel: cpuidle: using governor menu Feb 12 21:51:33.663612 kernel: Simple Boot Flag at 0x36 set to 0x80 Feb 12 21:51:33.663617 kernel: ACPI: bus type PCI registered Feb 12 21:51:33.663623 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 12 21:51:33.663629 kernel: dca service started, version 1.12.1 Feb 12 21:51:33.663635 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Feb 12 21:51:33.663641 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in 
E820 Feb 12 21:51:33.663647 kernel: PCI: Using configuration type 1 for base access Feb 12 21:51:33.663653 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 12 21:51:33.663659 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 12 21:51:33.663664 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 12 21:51:33.663670 kernel: ACPI: Added _OSI(Module Device) Feb 12 21:51:33.663675 kernel: ACPI: Added _OSI(Processor Device) Feb 12 21:51:33.663681 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 12 21:51:33.663687 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 12 21:51:33.663693 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 12 21:51:33.663699 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 12 21:51:33.663705 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 12 21:51:33.672313 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 12 21:51:33.672335 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Feb 12 21:51:33.672344 kernel: ACPI: Interpreter enabled Feb 12 21:51:33.672353 kernel: ACPI: PM: (supports S0 S1 S5) Feb 12 21:51:33.672363 kernel: ACPI: Using IOAPIC for interrupt routing Feb 12 21:51:33.672371 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 12 21:51:33.672382 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Feb 12 21:51:33.672390 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Feb 12 21:51:33.672496 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 12 21:51:33.672556 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Feb 12 21:51:33.672603 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Feb 12 21:51:33.672611 kernel: PCI host bridge to bus 0000:00 Feb 12 21:51:33.672660 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 12 
21:51:33.672705 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000cffff window] Feb 12 21:51:33.675858 kernel: pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window] Feb 12 21:51:33.675936 kernel: pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window] Feb 12 21:51:33.675980 kernel: pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window] Feb 12 21:51:33.676021 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 12 21:51:33.676060 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 12 21:51:33.676099 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Feb 12 21:51:33.676141 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Feb 12 21:51:33.676196 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Feb 12 21:51:33.676249 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Feb 12 21:51:33.676303 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Feb 12 21:51:33.676374 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Feb 12 21:51:33.676436 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Feb 12 21:51:33.676507 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 12 21:51:33.676578 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 12 21:51:33.676639 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 12 21:51:33.676687 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 12 21:51:33.676765 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Feb 12 21:51:33.676832 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Feb 12 21:51:33.676890 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Feb 12 21:51:33.676958 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Feb 12 21:51:33.677025 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Feb 12 
21:51:33.677092 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Feb 12 21:51:33.677168 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Feb 12 21:51:33.677228 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Feb 12 21:51:33.677276 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Feb 12 21:51:33.677325 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Feb 12 21:51:33.677370 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Feb 12 21:51:33.677416 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 12 21:51:33.677472 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Feb 12 21:51:33.677549 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.677615 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.677671 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.677747 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.677822 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.677878 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.677931 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.677989 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.678069 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.678140 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.678206 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.678271 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.678339 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.678401 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.678465 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Feb 12 
21:51:33.678522 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.678595 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.678666 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.678748 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.678812 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.678871 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.678918 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.678985 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.679046 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.679104 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.679163 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.679237 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.679305 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.679375 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.679441 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.679507 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.679572 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.679635 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.679702 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.679789 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.679849 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.679902 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.679948 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.679997 kernel: pci 0000:00:17.3: [15ad:07a0] 
type 01 class 0x060400 Feb 12 21:51:33.680045 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.680094 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.680139 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.680191 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.680235 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.680285 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.680334 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.680384 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.680450 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.680517 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.680584 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.680657 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.688761 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.688847 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.688900 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.688959 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.689009 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.689060 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.689112 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.689169 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.689216 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.689266 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.689312 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.689361 kernel: 
pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Feb 12 21:51:33.689407 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.689460 kernel: pci_bus 0000:01: extended config space not accessible Feb 12 21:51:33.689509 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 12 21:51:33.689558 kernel: pci_bus 0000:02: extended config space not accessible Feb 12 21:51:33.689566 kernel: acpiphp: Slot [32] registered Feb 12 21:51:33.689572 kernel: acpiphp: Slot [33] registered Feb 12 21:51:33.689578 kernel: acpiphp: Slot [34] registered Feb 12 21:51:33.689584 kernel: acpiphp: Slot [35] registered Feb 12 21:51:33.689589 kernel: acpiphp: Slot [36] registered Feb 12 21:51:33.689597 kernel: acpiphp: Slot [37] registered Feb 12 21:51:33.689602 kernel: acpiphp: Slot [38] registered Feb 12 21:51:33.689608 kernel: acpiphp: Slot [39] registered Feb 12 21:51:33.689613 kernel: acpiphp: Slot [40] registered Feb 12 21:51:33.689619 kernel: acpiphp: Slot [41] registered Feb 12 21:51:33.689625 kernel: acpiphp: Slot [42] registered Feb 12 21:51:33.689630 kernel: acpiphp: Slot [43] registered Feb 12 21:51:33.689636 kernel: acpiphp: Slot [44] registered Feb 12 21:51:33.689642 kernel: acpiphp: Slot [45] registered Feb 12 21:51:33.689649 kernel: acpiphp: Slot [46] registered Feb 12 21:51:33.689655 kernel: acpiphp: Slot [47] registered Feb 12 21:51:33.689660 kernel: acpiphp: Slot [48] registered Feb 12 21:51:33.689666 kernel: acpiphp: Slot [49] registered Feb 12 21:51:33.689672 kernel: acpiphp: Slot [50] registered Feb 12 21:51:33.689678 kernel: acpiphp: Slot [51] registered Feb 12 21:51:33.689683 kernel: acpiphp: Slot [52] registered Feb 12 21:51:33.689689 kernel: acpiphp: Slot [53] registered Feb 12 21:51:33.689694 kernel: acpiphp: Slot [54] registered Feb 12 21:51:33.689700 kernel: acpiphp: Slot [55] registered Feb 12 21:51:33.689706 kernel: acpiphp: Slot [56] registered Feb 12 21:51:33.691783 kernel: acpiphp: Slot [57] registered Feb 12 21:51:33.691802 kernel: 
acpiphp: Slot [58] registered Feb 12 21:51:33.691808 kernel: acpiphp: Slot [59] registered Feb 12 21:51:33.691814 kernel: acpiphp: Slot [60] registered Feb 12 21:51:33.691820 kernel: acpiphp: Slot [61] registered Feb 12 21:51:33.691826 kernel: acpiphp: Slot [62] registered Feb 12 21:51:33.691831 kernel: acpiphp: Slot [63] registered Feb 12 21:51:33.691906 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Feb 12 21:51:33.691960 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Feb 12 21:51:33.692013 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Feb 12 21:51:33.692066 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 12 21:51:33.692113 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Feb 12 21:51:33.692160 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000cffff window] (subtractive decode) Feb 12 21:51:33.692223 kernel: pci 0000:00:11.0: bridge window [mem 0x000d0000-0x000d3fff window] (subtractive decode) Feb 12 21:51:33.692269 kernel: pci 0000:00:11.0: bridge window [mem 0x000d4000-0x000d7fff window] (subtractive decode) Feb 12 21:51:33.692317 kernel: pci 0000:00:11.0: bridge window [mem 0x000d8000-0x000dbfff window] (subtractive decode) Feb 12 21:51:33.692362 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Feb 12 21:51:33.692408 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Feb 12 21:51:33.692452 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Feb 12 21:51:33.692506 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Feb 12 21:51:33.692561 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Feb 12 21:51:33.692609 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Feb 12 21:51:33.692678 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 12 
21:51:33.692747 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 12 21:51:33.692795 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Feb 12 21:51:33.692844 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Feb 12 21:51:33.692890 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Feb 12 21:51:33.692935 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Feb 12 21:51:33.692982 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Feb 12 21:51:33.693028 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Feb 12 21:51:33.693076 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Feb 12 21:51:33.693121 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Feb 12 21:51:33.693169 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Feb 12 21:51:33.693214 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Feb 12 21:51:33.693258 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Feb 12 21:51:33.693302 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Feb 12 21:51:33.693348 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Feb 12 21:51:33.693392 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Feb 12 21:51:33.693440 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Feb 12 21:51:33.693487 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Feb 12 21:51:33.693532 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Feb 12 21:51:33.693585 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 12 21:51:33.693636 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Feb 12 21:51:33.693686 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Feb 12 21:51:33.695001 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Feb 12 21:51:33.695065 kernel: pci 0000:00:15.6: PCI 
bridge to [bus 09] Feb 12 21:51:33.695116 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Feb 12 21:51:33.695164 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Feb 12 21:51:33.695213 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Feb 12 21:51:33.695260 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Feb 12 21:51:33.695309 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Feb 12 21:51:33.695364 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Feb 12 21:51:33.695413 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Feb 12 21:51:33.695461 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Feb 12 21:51:33.695508 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Feb 12 21:51:33.695556 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Feb 12 21:51:33.695604 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 12 21:51:33.695671 kernel: pci 0000:0b:00.0: supports D1 D2 Feb 12 21:51:33.695743 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 12 21:51:33.695795 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Feb 12 21:51:33.695843 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Feb 12 21:51:33.695889 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Feb 12 21:51:33.695936 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Feb 12 21:51:33.695984 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Feb 12 21:51:33.696031 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Feb 12 21:51:33.696079 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Feb 12 21:51:33.696126 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Feb 12 21:51:33.696174 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Feb 12 21:51:33.696219 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Feb 12 21:51:33.696266 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Feb 12 21:51:33.696311 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Feb 12 21:51:33.696358 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Feb 12 21:51:33.696404 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Feb 12 21:51:33.696452 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 12 21:51:33.696499 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Feb 12 21:51:33.696545 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Feb 12 21:51:33.696590 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 12 21:51:33.696638 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Feb 12 21:51:33.696684 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Feb 12 21:51:33.696744 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Feb 12 21:51:33.696798 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Feb 12 21:51:33.696848 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Feb 12 21:51:33.696894 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Feb 12 21:51:33.696941 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Feb 12 21:51:33.696988 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Feb 12 21:51:33.697038 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 12 21:51:33.697085 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Feb 12 21:51:33.697130 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Feb 12 21:51:33.697175 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Feb 12 21:51:33.697223 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 12 21:51:33.697270 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Feb 12 21:51:33.697316 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Feb 12 21:51:33.697361 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Feb 12 21:51:33.697412 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Feb 12 21:51:33.697458 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Feb 12 21:51:33.697504 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Feb 12 21:51:33.697551 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Feb 12 21:51:33.697597 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Feb 12 21:51:33.697645 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Feb 12 21:51:33.697691 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Feb 12 21:51:33.697762 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 12 21:51:33.697829 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Feb 12 21:51:33.697875 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Feb 12 21:51:33.697920 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 12 21:51:33.697970 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Feb 12 21:51:33.698015 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Feb 12 21:51:33.698061 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Feb 12 21:51:33.698108 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Feb 12 21:51:33.698153 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Feb 12 21:51:33.698198 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Feb 12 21:51:33.698244 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Feb 12 21:51:33.698290 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Feb 12 21:51:33.698337 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 12 21:51:33.698385 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Feb 12 21:51:33.698430 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Feb 12 21:51:33.698475 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Feb 12 21:51:33.698520 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Feb 12 21:51:33.698568 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Feb 12 21:51:33.698612 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Feb 12 21:51:33.698658 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Feb 12 21:51:33.698705 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Feb 12 21:51:33.698769 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Feb 12 21:51:33.698816 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Feb 12 21:51:33.698861 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Feb 12 21:51:33.698909 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Feb 12 21:51:33.698954 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Feb 12 21:51:33.699000 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 12 21:51:33.699047 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Feb 12 
21:51:33.699095 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Feb 12 21:51:33.699140 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Feb 12 21:51:33.699188 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Feb 12 21:51:33.699234 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Feb 12 21:51:33.699279 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Feb 12 21:51:33.699326 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Feb 12 21:51:33.699372 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Feb 12 21:51:33.699417 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Feb 12 21:51:33.699468 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Feb 12 21:51:33.699512 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Feb 12 21:51:33.699557 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Feb 12 21:51:33.699565 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Feb 12 21:51:33.699572 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Feb 12 21:51:33.699578 kernel: ACPI: PCI: Interrupt link LNKB disabled
Feb 12 21:51:33.699583 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 21:51:33.699589 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Feb 12 21:51:33.699597 kernel: iommu: Default domain type: Translated
Feb 12 21:51:33.699602 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 21:51:33.699648 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Feb 12 21:51:33.699694 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 21:51:33.699746 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Feb 12 21:51:33.699755 kernel: vgaarb: loaded
Feb 12 21:51:33.699761 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 21:51:33.699767 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 21:51:33.699773 kernel: PTP clock support registered
Feb 12 21:51:33.699780 kernel: PCI: Using ACPI for IRQ routing
Feb 12 21:51:33.699786 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 21:51:33.699792 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Feb 12 21:51:33.699797 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Feb 12 21:51:33.699803 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Feb 12 21:51:33.699809 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Feb 12 21:51:33.699815 kernel: clocksource: Switched to clocksource tsc-early
Feb 12 21:51:33.699820 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 21:51:33.699826 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 21:51:33.699833 kernel: pnp: PnP ACPI init
Feb 12 21:51:33.699889 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Feb 12 21:51:33.699937 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Feb 12 21:51:33.699978 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Feb 12 21:51:33.700032 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Feb 12 21:51:33.700104 kernel: pnp 00:06: [dma 2]
Feb 12 21:51:33.700173 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Feb 12 21:51:33.700227 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Feb 12 21:51:33.700279 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Feb 12 21:51:33.700288 kernel: pnp: PnP ACPI: found 8 devices
Feb 12 21:51:33.700295 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 21:51:33.700301 kernel: NET: Registered PF_INET protocol family
Feb 12 21:51:33.700306 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 21:51:33.700313 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 12 21:51:33.700318 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 21:51:33.700326 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 21:51:33.700332 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 12 21:51:33.700338 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 12 21:51:33.700344 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 21:51:33.700349 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 21:51:33.700355 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 21:51:33.700361 kernel: NET: Registered PF_XDP protocol family
Feb 12 21:51:33.700416 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Feb 12 21:51:33.700468 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 12 21:51:33.700517 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 12 21:51:33.700565 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 12 21:51:33.700612 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 12 21:51:33.700676 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Feb 12 21:51:33.700894 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Feb 12 21:51:33.700968 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Feb 12 21:51:33.701039 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Feb 12 21:51:33.701108 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Feb 12 21:51:33.701178 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Feb 12 21:51:33.701248 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Feb 12 21:51:33.701317 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Feb 12 21:51:33.701388 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Feb 12 21:51:33.701456 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Feb 12 21:51:33.701522 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Feb 12 21:51:33.701590 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Feb 12 21:51:33.701657 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Feb 12 21:51:33.701731 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Feb 12 21:51:33.701804 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Feb 12 21:51:33.701871 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Feb 12 21:51:33.701939 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Feb 12 21:51:33.702006 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Feb 12 21:51:33.702075 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
Feb 12 21:51:33.702142 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
Feb 12 21:51:33.702219 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.702286 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.702358 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.702425 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.702493 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.702560 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.702627 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.702694 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.703490 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.703545 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.703595 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.703642 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.703689 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.703982 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.704040 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.704093 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.704146 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.704213 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.704266 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.704538 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.704595 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.704648 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.704704 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.704770 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.704820 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.704867 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.704913 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.704958 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.705005 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.705050 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.705095 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.705140 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.705205 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.705272 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.705321 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.705366 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.705411 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.705456 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.705501 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.705546 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.705595 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.705640 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.705685 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.705739 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.705785 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.705831 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.705875 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.705920 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.705964 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.706011 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.706055 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.706100 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.706144 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.706199 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.706244 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.706290 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.706333 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.706390 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.706445 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.706490 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.706535 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.706580 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.706624 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.706669 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.706722 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.706773 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.706828 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.706874 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.706924 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.706970 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.707015 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.707061 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.707107 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.707153 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.707213 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.707258 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.707306 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.707354 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.707400 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.707445 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.707491 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.707536 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.707582 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Feb 12 21:51:33.707626 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Feb 12 21:51:33.707697 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 12 21:51:33.707753 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Feb 12 21:51:33.707815 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Feb 12 21:51:33.707862 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Feb 12 21:51:33.707907 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Feb 12 21:51:33.707957 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref]
Feb 12 21:51:33.708004 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Feb 12 21:51:33.708049 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Feb 12 21:51:33.708095 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Feb 12 21:51:33.708140 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Feb 12 21:51:33.708196 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Feb 12 21:51:33.708254 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Feb 12 21:51:33.708301 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Feb 12 21:51:33.708346 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Feb 12 21:51:33.708392 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Feb 12 21:51:33.708437 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Feb 12 21:51:33.708481 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Feb 12 21:51:33.708526 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Feb 12 21:51:33.708570 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Feb 12 21:51:33.708615 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Feb 12 21:51:33.708661 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Feb 12 21:51:33.708709 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Feb 12 21:51:33.708781 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Feb 12 21:51:33.708839 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Feb 12 21:51:33.708886 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Feb 12 21:51:33.708943 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Feb 12 21:51:33.709000 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Feb 12 21:51:33.709047 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Feb 12 21:51:33.709094 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Feb 12 21:51:33.709139 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Feb 12 21:51:33.709194 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Feb 12 21:51:33.709241 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Feb 12 21:51:33.709286 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Feb 12 21:51:33.709335 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
Feb 12 21:51:33.709382 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Feb 12 21:51:33.709429 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Feb 12 21:51:33.709475 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Feb 12 21:51:33.709521 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Feb 12 21:51:33.709568 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Feb 12 21:51:33.709614 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Feb 12 21:51:33.709659 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Feb 12 21:51:33.709704 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Feb 12 21:51:33.710091 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Feb 12 21:51:33.710144 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Feb 12 21:51:33.710428 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Feb 12 21:51:33.710484 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Feb 12 21:51:33.710532 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Feb 12 21:51:33.710578 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Feb 12 21:51:33.710878 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Feb 12 21:51:33.710936 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Feb 12 21:51:33.710985 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Feb 12 21:51:33.711033 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Feb 12 21:51:33.711102 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Feb 12 21:51:33.711358 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Feb 12 21:51:33.711410 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Feb 12 21:51:33.711459 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Feb 12 21:51:33.711527 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Feb 12 21:51:33.711817 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Feb 12 21:51:33.711869 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Feb 12 21:51:33.711917 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Feb 12 21:51:33.711985 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Feb 12 21:51:33.712252 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Feb 12 21:51:33.712304 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Feb 12 21:51:33.712560 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Feb 12 21:51:33.712618 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Feb 12 21:51:33.712669 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Feb 12 21:51:33.712722 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Feb 12 21:51:33.712775 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Feb 12 21:51:33.712821 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Feb 12 21:51:33.712868 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Feb 12 21:51:33.712913 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Feb 12 21:51:33.712958 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Feb 12 21:51:33.713003 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Feb 12 21:51:33.713052 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Feb 12 21:51:33.713096 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Feb 12 21:51:33.713142 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Feb 12 21:51:33.713187 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Feb 12 21:51:33.713232 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Feb 12 21:51:33.713276 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Feb 12 21:51:33.713323 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Feb 12 21:51:33.713368 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Feb 12 21:51:33.713413 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Feb 12 21:51:33.713458 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Feb 12 21:51:33.713505 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Feb 12 21:51:33.713550 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Feb 12 21:51:33.713596 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Feb 12 21:51:33.713641 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Feb 12 21:51:33.713686 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Feb 12 21:51:33.713746 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Feb 12 21:51:33.713797 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Feb 12 21:51:33.713843 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Feb 12 21:51:33.713888 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Feb 12 21:51:33.713938 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Feb 12 21:51:33.713984 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Feb 12 21:51:33.714029 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Feb 12 21:51:33.714074 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Feb 12 21:51:33.714120 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Feb 12 21:51:33.714166 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Feb 12 21:51:33.714211 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Feb 12 21:51:33.714257 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Feb 12 21:51:33.714302 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Feb 12 21:51:33.714347 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Feb 12 21:51:33.714395 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Feb 12 21:51:33.714440 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Feb 12 21:51:33.714485 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Feb 12 21:51:33.714530 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Feb 12 21:51:33.714576 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Feb 12 21:51:33.714622 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Feb 12 21:51:33.714668 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Feb 12 21:51:33.714722 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Feb 12 21:51:33.714774 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Feb 12 21:51:33.714822 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Feb 12 21:51:33.714868 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Feb 12 21:51:33.714912 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Feb 12 21:51:33.714957 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
Feb 12 21:51:33.714997 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000cffff window]
Feb 12 21:51:33.715037 kernel: pci_bus 0000:00: resource 6 [mem 0x000d0000-0x000d3fff window]
Feb 12 21:51:33.715076 kernel: pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff window]
Feb 12 21:51:33.715116 kernel: pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff window]
Feb 12 21:51:33.715158 kernel: pci_bus 0000:00: resource 9 [mem 0xc0000000-0xfebfffff window]
Feb 12 21:51:33.715198 kernel: pci_bus 0000:00: resource 10 [io 0x0000-0x0cf7 window]
Feb 12 21:51:33.715239 kernel: pci_bus 0000:00: resource 11 [io 0x0d00-0xfeff window]
Feb 12 21:51:33.715284 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff]
Feb 12 21:51:33.715325 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
Feb 12 21:51:33.715367 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
Feb 12 21:51:33.715407 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
Feb 12 21:51:33.715451 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000cffff window]
Feb 12 21:51:33.715492 kernel: pci_bus 0000:02: resource 6 [mem 0x000d0000-0x000d3fff window]
Feb 12 21:51:33.715534 kernel: pci_bus 0000:02: resource 7 [mem 0x000d4000-0x000d7fff window]
Feb 12 21:51:33.715575 kernel: pci_bus 0000:02: resource 8 [mem 0x000d8000-0x000dbfff window]
Feb 12 21:51:33.715616 kernel: pci_bus 0000:02: resource 9 [mem 0xc0000000-0xfebfffff window]
Feb 12 21:51:33.715657 kernel: pci_bus 0000:02: resource 10 [io 0x0000-0x0cf7 window]
Feb 12 21:51:33.715698 kernel: pci_bus 0000:02: resource 11 [io 0x0d00-0xfeff window]
Feb 12 21:51:33.716080 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff]
Feb 12 21:51:33.716438 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
Feb 12 21:51:33.716509 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
Feb 12 21:51:33.716574 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff]
Feb 12 21:51:33.716630 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
Feb 12 21:51:33.716723 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
Feb 12 21:51:33.717156 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff]
Feb 12 21:51:33.717500 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
Feb 12 21:51:33.717565 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
Feb 12 21:51:33.717631 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
Feb 12 21:51:33.717691 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
Feb 12 21:51:33.718119 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
Feb 12 21:51:33.718190 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
Feb 12 21:51:33.718262 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
Feb 12 21:51:33.718322 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
Feb 12 21:51:33.718372 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
Feb 12 21:51:33.718414 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
Feb 12 21:51:33.718461 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
Feb 12 21:51:33.718503 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
Feb 12 21:51:33.718554 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]
Feb 12 21:51:33.718597 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
Feb 12 21:51:33.718639 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
Feb 12 21:51:33.718686 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff]
Feb 12 21:51:33.718736 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
Feb 12 21:51:33.718779 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
Feb 12 21:51:33.718824 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff]
Feb 12 21:51:33.718879 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
Feb 12 21:51:33.718927 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
Feb 12 21:51:33.718973 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
Feb 12 21:51:33.719015 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
Feb 12 21:51:33.719060 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
Feb 12 21:51:33.719102 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
Feb 12 21:51:33.719149 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
Feb 12 21:51:33.719193 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
Feb 12 21:51:33.719241 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
Feb 12 21:51:33.719283 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
Feb 12 21:51:33.719328 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
Feb 12 21:51:33.719370 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
Feb 12 21:51:33.719415 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff]
Feb 12 21:51:33.719459 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
Feb 12 21:51:33.719501 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
Feb 12 21:51:33.719546 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff]
Feb 12 21:51:33.719587 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
Feb 12 21:51:33.719629 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
Feb 12 21:51:33.719673 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff]
Feb 12 21:51:33.719729 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
Feb 12 21:51:33.719775 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
Feb 12 21:51:33.719821 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
Feb 12 21:51:33.719864 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
Feb 12 21:51:33.719912 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
Feb 12 21:51:33.720173 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
Feb 12 21:51:33.720223 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
Feb 12 21:51:33.720270 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
Feb 12 21:51:33.720337 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
Feb 12 21:51:33.720599 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
Feb 12 21:51:33.720653 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
Feb 12 21:51:33.720921 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
Feb 12 21:51:33.720976 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
Feb 12 21:51:33.721021 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
Feb 12 21:51:33.721063 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
Feb 12 21:51:33.721109 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff]
Feb 12 21:51:33.721152 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
Feb 12 21:51:33.721218 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
Feb 12 21:51:33.721488 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
Feb 12 21:51:33.721541 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
Feb 12 21:51:33.721589 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
Feb 12 21:51:33.721633 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
Feb 12 21:51:33.721678 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
Feb 12 21:51:33.721733 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
Feb 12 21:51:33.722118 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
Feb 12 21:51:33.722168 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
Feb 12 21:51:33.722217 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
Feb 12 21:51:33.722528 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
Feb 12 21:51:33.722580 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
Feb 12 21:51:33.722625 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
Feb 12 21:51:33.722677 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 21:51:33.722689 kernel: PCI: CLS 32 bytes, default 64
Feb 12 21:51:33.722697 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 12 21:51:33.722703 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Feb 12 21:51:33.722709 kernel: clocksource: Switched to clocksource tsc
Feb 12 21:51:33.722742 kernel: Initialise system trusted keyrings
Feb 12 21:51:33.722750 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 12 21:51:33.722756 kernel: Key type asymmetric registered
Feb 12 21:51:33.722762 kernel: Asymmetric key parser 'x509' registered
Feb 12 21:51:33.722768 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 21:51:33.722776 kernel: io scheduler mq-deadline registered
Feb 12 21:51:33.722782 kernel: io scheduler kyber registered
Feb 12 21:51:33.722788 kernel: io scheduler bfq registered
Feb 12 21:51:33.722842 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
Feb 12 21:51:33.722891 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.722939 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Feb 12 21:51:33.722986 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.723033 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Feb 12 21:51:33.723081 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.723130 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Feb 12 21:51:33.723177 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.723224 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Feb 12 21:51:33.723271 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.723319 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Feb 12 21:51:33.723368 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.723414 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Feb 12 21:51:33.723461 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.723508 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Feb 12 21:51:33.723554 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.723880 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Feb 12 21:51:33.723939 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.723989 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Feb 12 21:51:33.724055 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.724312 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Feb 12 21:51:33.724367 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.724415 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Feb 12 21:51:33.724465 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.724534 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Feb 12 21:51:33.724840 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.725102 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Feb 12 21:51:33.725155 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.725488 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Feb 12 21:51:33.725549 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.725600 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Feb 12 21:51:33.725647 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.725832 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Feb 12 21:51:33.725885 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.726226 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Feb 12 21:51:33.726284 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.726334 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Feb 12 21:51:33.726381 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.726429 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Feb 12 21:51:33.726475 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.726521 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Feb 12 21:51:33.726571 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.726617 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Feb 12 21:51:33.726664 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.726709 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Feb 12 21:51:33.726787 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.726838 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Feb 12 21:51:33.726884 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.726930 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Feb 12 21:51:33.727114 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.727365 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Feb 12 21:51:33.727418 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.727759 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Feb 12 21:51:33.727828 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.727893 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Feb 12 21:51:33.727940 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.727985 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Feb 12 21:51:33.728191 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 12 21:51:33.728244 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ
53 Feb 12 21:51:33.728572 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:51:33.728629 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Feb 12 21:51:33.728679 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:51:33.728751 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Feb 12 21:51:33.728803 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 12 21:51:33.728812 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 12 21:51:33.728818 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 12 21:51:33.728825 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 12 21:51:33.728831 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Feb 12 21:51:33.728837 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 12 21:51:33.728843 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 12 21:51:33.728893 kernel: rtc_cmos 00:01: registered as rtc0 Feb 12 21:51:33.728938 kernel: rtc_cmos 00:01: setting system clock to 2024-02-12T21:51:33 UTC (1707774693) Feb 12 21:51:33.729122 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Feb 12 21:51:33.729133 kernel: fail to initialize ptp_kvm Feb 12 21:51:33.729331 kernel: intel_pstate: CPU model not supported Feb 12 21:51:33.729341 kernel: NET: Registered PF_INET6 protocol family Feb 12 21:51:33.729348 kernel: Segment Routing with IPv6 Feb 12 21:51:33.729354 kernel: In-situ OAM (IOAM) with IPv6 Feb 12 21:51:33.729360 kernel: NET: Registered PF_PACKET protocol family Feb 12 21:51:33.729368 kernel: Key type dns_resolver registered Feb 12 21:51:33.729374 kernel: IPI shorthand broadcast: enabled Feb 12 
21:51:33.729380 kernel: sched_clock: Marking stable (862327963, 227648783)->(1158768607, -68791861) Feb 12 21:51:33.729387 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 12 21:51:33.729393 kernel: registered taskstats version 1 Feb 12 21:51:33.729399 kernel: Loading compiled-in X.509 certificates Feb 12 21:51:33.729405 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 12 21:51:33.729411 kernel: Key type .fscrypt registered Feb 12 21:51:33.729417 kernel: Key type fscrypt-provisioning registered Feb 12 21:51:33.729425 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 12 21:51:33.729431 kernel: ima: Allocated hash algorithm: sha1 Feb 12 21:51:33.729437 kernel: ima: No architecture policies found Feb 12 21:51:33.729443 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 12 21:51:33.729449 kernel: Write protecting the kernel read-only data: 28672k Feb 12 21:51:33.729455 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 12 21:51:33.729461 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 12 21:51:33.729761 kernel: Run /init as init process Feb 12 21:51:33.729770 kernel: with arguments: Feb 12 21:51:33.729779 kernel: /init Feb 12 21:51:33.729785 kernel: with environment: Feb 12 21:51:33.729790 kernel: HOME=/ Feb 12 21:51:33.729796 kernel: TERM=linux Feb 12 21:51:33.729802 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 12 21:51:33.729810 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 21:51:33.729819 systemd[1]: Detected virtualization vmware. 
Feb 12 21:51:33.729826 systemd[1]: Detected architecture x86-64. Feb 12 21:51:33.729833 systemd[1]: Running in initrd. Feb 12 21:51:33.729839 systemd[1]: No hostname configured, using default hostname. Feb 12 21:51:33.729845 systemd[1]: Hostname set to <localhost>. Feb 12 21:51:33.729852 systemd[1]: Initializing machine ID from random generator. Feb 12 21:51:33.729858 systemd[1]: Queued start job for default target initrd.target. Feb 12 21:51:33.729865 systemd[1]: Started systemd-ask-password-console.path. Feb 12 21:51:33.729871 systemd[1]: Reached target cryptsetup.target. Feb 12 21:51:33.729877 systemd[1]: Reached target paths.target. Feb 12 21:51:33.729884 systemd[1]: Reached target slices.target. Feb 12 21:51:33.729891 systemd[1]: Reached target swap.target. Feb 12 21:51:33.729897 systemd[1]: Reached target timers.target. Feb 12 21:51:33.729903 systemd[1]: Listening on iscsid.socket. Feb 12 21:51:33.729910 systemd[1]: Listening on iscsiuio.socket. Feb 12 21:51:33.729917 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 21:51:33.729923 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 21:51:33.729929 systemd[1]: Listening on systemd-journald.socket. Feb 12 21:51:33.729936 systemd[1]: Listening on systemd-networkd.socket. Feb 12 21:51:33.729943 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 21:51:33.729949 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 21:51:33.729955 systemd[1]: Reached target sockets.target. Feb 12 21:51:33.729962 systemd[1]: Starting kmod-static-nodes.service... Feb 12 21:51:33.730072 systemd[1]: Finished network-cleanup.service. Feb 12 21:51:33.730081 systemd[1]: Starting systemd-fsck-usr.service... Feb 12 21:51:33.730088 systemd[1]: Starting systemd-journald.service... Feb 12 21:51:33.730094 systemd[1]: Starting systemd-modules-load.service... Feb 12 21:51:33.730102 systemd[1]: Starting systemd-resolved.service... Feb 12 21:51:33.730109 systemd[1]: Starting systemd-vconsole-setup.service... 
Feb 12 21:51:33.730115 systemd[1]: Finished kmod-static-nodes.service. Feb 12 21:51:33.730121 systemd[1]: Finished systemd-fsck-usr.service. Feb 12 21:51:33.730128 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 21:51:33.730134 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 21:51:33.730140 kernel: audit: type=1130 audit(1707774693.662:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:33.730147 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 21:51:33.730155 kernel: audit: type=1130 audit(1707774693.667:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:33.730161 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 21:51:33.730167 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 21:51:33.730174 kernel: audit: type=1130 audit(1707774693.681:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:33.730180 systemd[1]: Starting dracut-cmdline.service... Feb 12 21:51:33.730186 systemd[1]: Started systemd-resolved.service. Feb 12 21:51:33.730192 systemd[1]: Reached target nss-lookup.target. Feb 12 21:51:33.730469 kernel: audit: type=1130 audit(1707774693.695:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:33.730477 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Feb 12 21:51:33.730484 kernel: Bridge firewalling registered Feb 12 21:51:33.730489 kernel: SCSI subsystem initialized Feb 12 21:51:33.730499 systemd-journald[217]: Journal started Feb 12 21:51:33.730534 systemd-journald[217]: Runtime Journal (/run/log/journal/83a9b77e6b3f4ea3902b86aa68be99ac) is 4.8M, max 38.8M, 34.0M free. Feb 12 21:51:33.733932 systemd[1]: Started systemd-journald.service. Feb 12 21:51:33.733950 kernel: audit: type=1130 audit(1707774693.730:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:33.733962 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 21:51:33.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:33.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:33.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:33.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:33.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:51:33.664605 systemd-modules-load[218]: Inserted module 'overlay' Feb 12 21:51:33.737721 kernel: device-mapper: uevent: version 1.0.3 Feb 12 21:51:33.737735 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 21:51:33.694148 systemd-resolved[219]: Positive Trust Anchors: Feb 12 21:51:33.694154 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 21:51:33.742607 kernel: audit: type=1130 audit(1707774693.736:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:33.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:33.694174 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 21:51:33.695857 systemd-resolved[219]: Defaulting to hostname 'linux'. Feb 12 21:51:33.708239 systemd-modules-load[218]: Inserted module 'br_netfilter' Feb 12 21:51:33.738161 systemd-modules-load[218]: Inserted module 'dm_multipath' Feb 12 21:51:33.738499 systemd[1]: Finished systemd-modules-load.service. 
Feb 12 21:51:33.744156 dracut-cmdline[232]: dracut-dracut-053 Feb 12 21:51:33.744156 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 12 21:51:33.744156 dracut-cmdline[232]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 21:51:33.739027 systemd[1]: Starting systemd-sysctl.service... Feb 12 21:51:33.749070 kernel: audit: type=1130 audit(1707774693.744:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:33.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:33.745986 systemd[1]: Finished systemd-sysctl.service. Feb 12 21:51:33.761731 kernel: Loading iSCSI transport class v2.0-870. Feb 12 21:51:33.769763 kernel: iscsi: registered transport (tcp) Feb 12 21:51:33.784741 kernel: iscsi: registered transport (qla4xxx) Feb 12 21:51:33.784780 kernel: QLogic iSCSI HBA Driver Feb 12 21:51:33.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:33.802979 systemd[1]: Finished dracut-cmdline.service. Feb 12 21:51:33.803627 systemd[1]: Starting dracut-pre-udev.service... 
Feb 12 21:51:33.806733 kernel: audit: type=1130 audit(1707774693.801:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:33.840736 kernel: raid6: avx2x4 gen() 48423 MB/s Feb 12 21:51:33.857731 kernel: raid6: avx2x4 xor() 21235 MB/s Feb 12 21:51:33.874735 kernel: raid6: avx2x2 gen() 52410 MB/s Feb 12 21:51:33.891730 kernel: raid6: avx2x2 xor() 30465 MB/s Feb 12 21:51:33.908731 kernel: raid6: avx2x1 gen() 38825 MB/s Feb 12 21:51:33.925736 kernel: raid6: avx2x1 xor() 25070 MB/s Feb 12 21:51:33.942725 kernel: raid6: sse2x4 gen() 20742 MB/s Feb 12 21:51:33.959724 kernel: raid6: sse2x4 xor() 11947 MB/s Feb 12 21:51:33.976724 kernel: raid6: sse2x2 gen() 21487 MB/s Feb 12 21:51:33.993728 kernel: raid6: sse2x2 xor() 13106 MB/s Feb 12 21:51:34.010737 kernel: raid6: sse2x1 gen() 16444 MB/s Feb 12 21:51:34.027939 kernel: raid6: sse2x1 xor() 8675 MB/s Feb 12 21:51:34.027963 kernel: raid6: using algorithm avx2x2 gen() 52410 MB/s Feb 12 21:51:34.027971 kernel: raid6: .... xor() 30465 MB/s, rmw enabled Feb 12 21:51:34.029147 kernel: raid6: using avx2x2 recovery algorithm Feb 12 21:51:34.037725 kernel: xor: automatically using best checksumming function avx Feb 12 21:51:34.099734 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 21:51:34.104745 systemd[1]: Finished dracut-pre-udev.service. Feb 12 21:51:34.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:34.105424 systemd[1]: Starting systemd-udevd.service... 
Feb 12 21:51:34.103000 audit: BPF prog-id=7 op=LOAD Feb 12 21:51:34.103000 audit: BPF prog-id=8 op=LOAD Feb 12 21:51:34.108735 kernel: audit: type=1130 audit(1707774694.103:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:34.116843 systemd-udevd[415]: Using default interface naming scheme 'v252'. Feb 12 21:51:34.120492 systemd[1]: Started systemd-udevd.service. Feb 12 21:51:34.121131 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 21:51:34.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:34.129438 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Feb 12 21:51:34.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:34.148466 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 21:51:34.149000 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 21:51:34.209340 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 21:51:34.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:51:34.286736 kernel: VMware PVSCSI driver - version 1.0.7.0-k Feb 12 21:51:34.288529 kernel: vmw_pvscsi: using 64bit dma Feb 12 21:51:34.288552 kernel: vmw_pvscsi: max_id: 16 Feb 12 21:51:34.288565 kernel: vmw_pvscsi: setting ring_pages to 8 Feb 12 21:51:34.290796 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Feb 12 21:51:34.296122 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Feb 12 21:51:34.301669 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Feb 12 21:51:34.301768 kernel: vmw_pvscsi: enabling reqCallThreshold Feb 12 21:51:34.301777 kernel: vmw_pvscsi: driver-based request coalescing enabled Feb 12 21:51:34.301787 kernel: vmw_pvscsi: using MSI-X Feb 12 21:51:34.312726 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 21:51:34.314727 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Feb 12 21:51:34.317724 kernel: libata version 3.00 loaded. Feb 12 21:51:34.319729 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Feb 12 21:51:34.319810 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Feb 12 21:51:34.319870 kernel: ata_piix 0000:00:07.1: version 2.13 Feb 12 21:51:34.319923 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Feb 12 21:51:34.327284 kernel: scsi host1: ata_piix Feb 12 21:51:34.332755 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 12 21:51:34.332779 kernel: AES CTR mode by8 optimization enabled Feb 12 21:51:34.339162 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Feb 12 21:51:34.339298 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 12 21:51:34.339384 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Feb 12 21:51:34.339466 kernel: sd 0:0:0:0: [sda] Cache data unavailable Feb 12 21:51:34.339552 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Feb 12 21:51:34.341792 kernel: scsi host2: ata_piix Feb 12 21:51:34.341900 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Feb 12 21:51:34.341911 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Feb 12 21:51:34.345732 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 21:51:34.345760 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 12 21:51:34.506772 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Feb 12 21:51:34.512744 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Feb 12 21:51:34.539766 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Feb 12 21:51:34.539918 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 12 21:51:34.555733 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 12 21:51:34.591731 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (469) Feb 12 21:51:34.594120 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 21:51:34.597339 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 21:51:34.599629 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 21:51:34.602043 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 21:51:34.602296 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 21:51:34.603136 systemd[1]: Starting disk-uuid.service... 
Feb 12 21:51:34.626729 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 21:51:35.639535 disk-uuid[549]: The operation has completed successfully. Feb 12 21:51:35.639816 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 21:51:35.738099 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 21:51:35.738397 systemd[1]: Finished disk-uuid.service. Feb 12 21:51:35.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:35.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:35.739199 systemd[1]: Starting verity-setup.service... Feb 12 21:51:35.750754 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 12 21:51:35.856994 systemd[1]: Found device dev-mapper-usr.device. Feb 12 21:51:35.858134 systemd[1]: Mounting sysusr-usr.mount... Feb 12 21:51:35.859316 systemd[1]: Finished verity-setup.service. Feb 12 21:51:35.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:35.959857 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 21:51:35.960151 systemd[1]: Mounted sysusr-usr.mount. Feb 12 21:51:35.960754 systemd[1]: Starting afterburn-network-kargs.service... Feb 12 21:51:35.961207 systemd[1]: Starting ignition-setup.service... 
Feb 12 21:51:36.016113 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 21:51:36.016148 kernel: BTRFS info (device sda6): using free space tree Feb 12 21:51:36.016156 kernel: BTRFS info (device sda6): has skinny extents Feb 12 21:51:36.028729 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 12 21:51:36.036485 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 21:51:36.041890 systemd[1]: Finished ignition-setup.service. Feb 12 21:51:36.042496 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 21:51:36.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:36.143333 systemd[1]: Finished afterburn-network-kargs.service. Feb 12 21:51:36.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:36.144040 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 21:51:36.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:36.192000 audit: BPF prog-id=9 op=LOAD Feb 12 21:51:36.193829 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 21:51:36.194822 systemd[1]: Starting systemd-networkd.service... Feb 12 21:51:36.208602 systemd-networkd[734]: lo: Link UP Feb 12 21:51:36.208608 systemd-networkd[734]: lo: Gained carrier Feb 12 21:51:36.208867 systemd-networkd[734]: Enumeration completed Feb 12 21:51:36.209060 systemd-networkd[734]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. 
Feb 12 21:51:36.211732 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Feb 12 21:51:36.211838 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Feb 12 21:51:36.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:36.209326 systemd[1]: Started systemd-networkd.service. Feb 12 21:51:36.209470 systemd[1]: Reached target network.target. Feb 12 21:51:36.210091 systemd[1]: Starting iscsiuio.service... Feb 12 21:51:36.211493 systemd-networkd[734]: ens192: Link UP Feb 12 21:51:36.211495 systemd-networkd[734]: ens192: Gained carrier Feb 12 21:51:36.214680 systemd[1]: Started iscsiuio.service. Feb 12 21:51:36.215405 systemd[1]: Starting iscsid.service... Feb 12 21:51:36.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:36.217755 iscsid[739]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 21:51:36.217755 iscsid[739]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 21:51:36.217755 iscsid[739]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 21:51:36.217755 iscsid[739]: If using hardware iscsi like qla4xxx this message can be ignored. 
Feb 12 21:51:36.217755 iscsid[739]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 21:51:36.217755 iscsid[739]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 21:51:36.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:36.218872 systemd[1]: Started iscsid.service. Feb 12 21:51:36.219560 systemd[1]: Starting dracut-initqueue.service... Feb 12 21:51:36.226942 systemd[1]: Finished dracut-initqueue.service. Feb 12 21:51:36.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:36.227251 systemd[1]: Reached target remote-fs-pre.target. Feb 12 21:51:36.227460 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 21:51:36.227671 systemd[1]: Reached target remote-fs.target. Feb 12 21:51:36.228293 systemd[1]: Starting dracut-pre-mount.service... Feb 12 21:51:36.233145 systemd[1]: Finished dracut-pre-mount.service. Feb 12 21:51:36.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:51:36.277602 ignition[606]: Ignition 2.14.0 Feb 12 21:51:36.277612 ignition[606]: Stage: fetch-offline Feb 12 21:51:36.277645 ignition[606]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:51:36.277660 ignition[606]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 12 21:51:36.287578 ignition[606]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 12 21:51:36.287887 ignition[606]: parsed url from cmdline: "" Feb 12 21:51:36.287934 ignition[606]: no config URL provided Feb 12 21:51:36.288052 ignition[606]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 21:51:36.288218 ignition[606]: no config at "/usr/lib/ignition/user.ign" Feb 12 21:51:36.288797 ignition[606]: config successfully fetched Feb 12 21:51:36.288866 ignition[606]: parsing config with SHA512: 44c9d4a9af28b06075b9d1950945e2af03a67c29e5964d8816b64863cc912e7e0c41048a7c52bedde4dc91eae3a48fd4d5df177568cb527d079c1332b7f85cb7 Feb 12 21:51:36.320186 unknown[606]: fetched base config from "system" Feb 12 21:51:36.320194 unknown[606]: fetched user config from "vmware" Feb 12 21:51:36.320677 ignition[606]: fetch-offline: fetch-offline passed Feb 12 21:51:36.320735 ignition[606]: Ignition finished successfully Feb 12 21:51:36.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:36.321477 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 21:51:36.321620 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 12 21:51:36.322096 systemd[1]: Starting ignition-kargs.service... 
Feb 12 21:51:36.327431 ignition[754]: Ignition 2.14.0
Feb 12 21:51:36.327643 ignition[754]: Stage: kargs
Feb 12 21:51:36.327827 ignition[754]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:51:36.327980 ignition[754]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Feb 12 21:51:36.329296 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 12 21:51:36.331009 ignition[754]: kargs: kargs passed
Feb 12 21:51:36.331048 ignition[754]: Ignition finished successfully
Feb 12 21:51:36.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:36.331991 systemd[1]: Finished ignition-kargs.service.
Feb 12 21:51:36.332567 systemd[1]: Starting ignition-disks.service...
Feb 12 21:51:36.336993 ignition[761]: Ignition 2.14.0
Feb 12 21:51:36.337000 ignition[761]: Stage: disks
Feb 12 21:51:36.337063 ignition[761]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:51:36.337073 ignition[761]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Feb 12 21:51:36.338378 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 12 21:51:36.340280 ignition[761]: disks: disks passed
Feb 12 21:51:36.340310 ignition[761]: Ignition finished successfully
Feb 12 21:51:36.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:36.341013 systemd[1]: Finished ignition-disks.service.
Feb 12 21:51:36.341171 systemd[1]: Reached target initrd-root-device.target.
Feb 12 21:51:36.341266 systemd[1]: Reached target local-fs-pre.target.
Feb 12 21:51:36.341351 systemd[1]: Reached target local-fs.target.
Feb 12 21:51:36.341433 systemd[1]: Reached target sysinit.target.
Feb 12 21:51:36.341512 systemd[1]: Reached target basic.target.
Feb 12 21:51:36.342085 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 21:51:36.363808 systemd-fsck[769]: ROOT: clean, 602/1628000 files, 124050/1617920 blocks
Feb 12 21:51:36.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:36.365358 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 21:51:36.365958 systemd[1]: Mounting sysroot.mount...
Feb 12 21:51:36.375565 systemd[1]: Mounted sysroot.mount.
Feb 12 21:51:36.375730 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 21:51:36.375876 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 21:51:36.376956 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 21:51:36.377494 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 21:51:36.377701 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 21:51:36.378000 systemd[1]: Reached target ignition-diskful.target.
Feb 12 21:51:36.379118 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 21:51:36.379884 systemd[1]: Starting initrd-setup-root.service...
Feb 12 21:51:36.382828 initrd-setup-root[779]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 21:51:36.386405 initrd-setup-root[787]: cut: /sysroot/etc/group: No such file or directory
Feb 12 21:51:36.388903 initrd-setup-root[795]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 21:51:36.391351 initrd-setup-root[803]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 21:51:36.469359 systemd[1]: Finished initrd-setup-root.service.
Feb 12 21:51:36.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:36.470004 systemd[1]: Starting ignition-mount.service...
Feb 12 21:51:36.470546 systemd[1]: Starting sysroot-boot.service...
Feb 12 21:51:36.474635 bash[820]: umount: /sysroot/usr/share/oem: not mounted.
Feb 12 21:51:36.480481 ignition[821]: INFO : Ignition 2.14.0
Feb 12 21:51:36.480758 ignition[821]: INFO : Stage: mount
Feb 12 21:51:36.480952 ignition[821]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:51:36.481128 ignition[821]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Feb 12 21:51:36.482711 ignition[821]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 12 21:51:36.484689 ignition[821]: INFO : mount: mount passed
Feb 12 21:51:36.484860 ignition[821]: INFO : Ignition finished successfully
Feb 12 21:51:36.485464 systemd[1]: Finished ignition-mount.service.
Feb 12 21:51:36.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:36.491847 systemd[1]: Finished sysroot-boot.service.
Feb 12 21:51:36.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:36.920471 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 21:51:36.929731 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (830)
Feb 12 21:51:36.932153 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 21:51:36.932184 kernel: BTRFS info (device sda6): using free space tree
Feb 12 21:51:36.932192 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 21:51:36.935734 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 12 21:51:36.936817 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 21:51:36.937394 systemd[1]: Starting ignition-files.service...
Feb 12 21:51:36.947157 ignition[850]: INFO : Ignition 2.14.0
Feb 12 21:51:36.947157 ignition[850]: INFO : Stage: files
Feb 12 21:51:36.947533 ignition[850]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:51:36.947533 ignition[850]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Feb 12 21:51:36.948545 ignition[850]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 12 21:51:36.952934 ignition[850]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 21:51:36.961217 ignition[850]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 21:51:36.961217 ignition[850]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 21:51:36.979660 ignition[850]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 21:51:36.979970 ignition[850]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 21:51:36.984012 unknown[850]: wrote ssh authorized keys file for user: core
Feb 12 21:51:36.984547 ignition[850]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 21:51:36.985068 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 21:51:36.985068 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 12 21:51:37.019515 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 12 21:51:37.077959 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 21:51:37.078291 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 21:51:37.078823 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 21:51:37.079049 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 21:51:37.079350 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 12 21:51:37.318955 systemd-networkd[734]: ens192: Gained IPv6LL
Feb 12 21:51:37.567425 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 21:51:37.660505 ignition[850]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 12 21:51:37.660819 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 21:51:37.660819 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 21:51:37.660819 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 12 21:51:38.082468 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 12 21:51:38.138903 ignition[850]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 12 21:51:38.139212 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 21:51:38.139212 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 21:51:38.139212 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 21:51:38.139705 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 21:51:38.139705 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 12 21:51:38.205218 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 21:51:38.658807 ignition[850]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 12 21:51:38.659116 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 21:51:38.659116 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 21:51:38.659116 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 12 21:51:38.707089 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 12 21:51:38.870490 ignition[850]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 12 21:51:38.870814 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 21:51:38.870814 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 21:51:38.870814 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 12 21:51:38.917248 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 12 21:51:39.076672 ignition[850]: DEBUG : files: createFilesystemsFiles: createFiles: op(a): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 12 21:51:39.077033 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 21:51:39.077225 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 21:51:39.077430 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 12 21:51:39.482558 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 12 21:51:39.554089 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 21:51:39.554363 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 21:51:39.554696 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 21:51:39.554896 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 21:51:39.555120 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 21:51:39.555314 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 21:51:39.555537 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 21:51:39.555736 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 21:51:39.555955 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 21:51:39.559247 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 21:51:39.559469 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 21:51:39.564298 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Feb 12 21:51:39.564509 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(11): oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:51:39.574909 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2861655484"
Feb 12 21:51:39.576329 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (853)
Feb 12 21:51:39.576348 ignition[850]: CRITICAL : files: createFilesystemsFiles: createFiles: op(11): op(12): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2861655484": device or resource busy
Feb 12 21:51:39.576348 ignition[850]: ERROR : files: createFilesystemsFiles: createFiles: op(11): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2861655484", trying btrfs: device or resource busy
Feb 12 21:51:39.576348 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2861655484"
Feb 12 21:51:39.576348 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2861655484"
Feb 12 21:51:39.587849 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [started] unmounting "/mnt/oem2861655484"
Feb 12 21:51:39.588057 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [finished] unmounting "/mnt/oem2861655484"
Feb 12 21:51:39.588246 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Feb 12 21:51:39.588636 systemd[1]: mnt-oem2861655484.mount: Deactivated successfully.
Feb 12 21:51:39.597938 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(15): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Feb 12 21:51:39.598302 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(15): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Feb 12 21:51:39.598508 ignition[850]: INFO : files: op(16): [started] processing unit "vmtoolsd.service"
Feb 12 21:51:39.598655 ignition[850]: INFO : files: op(16): [finished] processing unit "vmtoolsd.service"
Feb 12 21:51:39.598812 ignition[850]: INFO : files: op(17): [started] processing unit "prepare-cni-plugins.service"
Feb 12 21:51:39.598987 ignition[850]: INFO : files: op(17): op(18): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 21:51:39.599331 ignition[850]: INFO : files: op(17): op(18): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 21:51:39.599331 ignition[850]: INFO : files: op(17): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 21:51:39.599331 ignition[850]: INFO : files: op(19): [started] processing unit "prepare-critools.service"
Feb 12 21:51:39.599331 ignition[850]: INFO : files: op(19): op(1a): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 21:51:39.599331 ignition[850]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 21:51:39.599331 ignition[850]: INFO : files: op(19): [finished] processing unit "prepare-critools.service"
Feb 12 21:51:39.599331 ignition[850]: INFO : files: op(1b): [started] processing unit "prepare-helm.service"
Feb 12 21:51:39.600694 ignition[850]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 21:51:39.600694 ignition[850]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 21:51:39.600694 ignition[850]: INFO : files: op(1b): [finished] processing unit "prepare-helm.service"
Feb 12 21:51:39.600694 ignition[850]: INFO : files: op(1d): [started] processing unit "coreos-metadata.service"
Feb 12 21:51:39.600694 ignition[850]: INFO : files: op(1d): op(1e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 21:51:39.600694 ignition[850]: INFO : files: op(1d): op(1e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 21:51:39.600694 ignition[850]: INFO : files: op(1d): [finished] processing unit "coreos-metadata.service"
Feb 12 21:51:39.600694 ignition[850]: INFO : files: op(1f): [started] processing unit "containerd.service"
Feb 12 21:51:39.600694 ignition[850]: INFO : files: op(1f): op(20): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 21:51:39.600694 ignition[850]: INFO : files: op(1f): op(20): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 21:51:39.600694 ignition[850]: INFO : files: op(1f): [finished] processing unit "containerd.service"
Feb 12 21:51:39.600694 ignition[850]: INFO : files: op(21): [started] setting preset to disabled for "coreos-metadata.service"
Feb 12 21:51:39.600694 ignition[850]: INFO : files: op(21): op(22): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 21:51:39.932536 ignition[850]: INFO : files: op(21): op(22): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 21:51:39.932771 ignition[850]: INFO : files: op(21): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 12 21:51:39.932771 ignition[850]: INFO : files: op(23): [started] setting preset to enabled for "vmtoolsd.service"
Feb 12 21:51:39.932771 ignition[850]: INFO : files: op(23): [finished] setting preset to enabled for "vmtoolsd.service"
Feb 12 21:51:39.932771 ignition[850]: INFO : files: op(24): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 21:51:39.932771 ignition[850]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 21:51:39.932771 ignition[850]: INFO : files: op(25): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 21:51:39.932771 ignition[850]: INFO : files: op(25): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 21:51:39.932771 ignition[850]: INFO : files: op(26): [started] setting preset to enabled for "prepare-helm.service"
Feb 12 21:51:39.932771 ignition[850]: INFO : files: op(26): [finished] setting preset to enabled for "prepare-helm.service"
Feb 12 21:51:39.932771 ignition[850]: INFO : files: createResultFile: createFiles: op(27): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 21:51:39.940535 kernel: kauditd_printk_skb: 24 callbacks suppressed
Feb 12 21:51:39.940552 kernel: audit: type=1130 audit(1707774699.932:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.933935 systemd[1]: Finished ignition-files.service.
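The files stage above fetches each artifact over HTTPS and compares it against a SHA512 digest embedded in the Ignition config (the "file matches expected sum of:" records). The same check can be reproduced by hand with coreutils; a minimal sketch, where the file path and payload are illustrative rather than taken from the log:

```shell
# Create a stand-in artifact (in the log this would be the downloaded binary).
printf 'example payload\n' > /tmp/ign_artifact

# Record its SHA512 digest, playing the role of the sum in the Ignition config.
sha512sum /tmp/ign_artifact > /tmp/ign_artifact.sha512

# Re-verify the file against the recorded digest, as Ignition does after each GET.
# On success this prints "/tmp/ign_artifact: OK"; on mismatch it exits non-zero.
sha512sum -c /tmp/ign_artifact.sha512
```

`sha512sum -c` is the generic equivalent of Ignition's built-in verification, not the code path Ignition itself runs.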
Feb 12 21:51:39.944772 kernel: audit: type=1130 audit(1707774699.939:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.944817 ignition[850]: INFO : files: createResultFile: createFiles: op(27): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 21:51:39.944817 ignition[850]: INFO : files: files passed
Feb 12 21:51:39.944817 ignition[850]: INFO : Ignition finished successfully
Feb 12 21:51:39.935260 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 21:51:39.938280 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 21:51:39.945668 initrd-setup-root-after-ignition[875]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 21:51:39.938649 systemd[1]: Starting ignition-quench.service...
Feb 12 21:51:39.940767 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 21:51:39.940809 systemd[1]: Finished ignition-quench.service.
Feb 12 21:51:39.941060 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 21:51:39.941188 systemd[1]: Reached target ignition-complete.target.
Feb 12 21:51:39.941775 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 21:51:39.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.951571 kernel: audit: type=1131 audit(1707774699.939:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.951593 kernel: audit: type=1130 audit(1707774699.939:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.952409 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 21:51:39.952478 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 21:51:39.952801 systemd[1]: Reached target initrd-fs.target.
Feb 12 21:51:39.957677 kernel: audit: type=1130 audit(1707774699.950:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.957694 kernel: audit: type=1131 audit(1707774699.950:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.952920 systemd[1]: Reached target initrd.target.
Feb 12 21:51:39.957897 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 21:51:39.958494 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 21:51:39.965984 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 21:51:39.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.970241 systemd[1]: Starting initrd-cleanup.service...
Feb 12 21:51:39.970767 kernel: audit: type=1130 audit(1707774699.964:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.973190 systemd[1]: Stopped target network.target.
Feb 12 21:51:39.973667 systemd[1]: Stopped target nss-lookup.target.
Feb 12 21:51:39.973843 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 21:51:39.974034 systemd[1]: Stopped target timers.target.
Feb 12 21:51:39.974201 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 21:51:39.974291 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 21:51:39.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.974614 systemd[1]: Stopped target initrd.target.
Feb 12 21:51:39.977734 kernel: audit: type=1131 audit(1707774699.972:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.977857 systemd[1]: Stopped target basic.target.
Feb 12 21:51:39.978131 systemd[1]: Stopped target ignition-complete.target.
Feb 12 21:51:39.978409 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 21:51:39.978683 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 21:51:39.978969 systemd[1]: Stopped target remote-fs.target.
Feb 12 21:51:39.979236 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 21:51:39.979511 systemd[1]: Stopped target sysinit.target.
Feb 12 21:51:39.979786 systemd[1]: Stopped target local-fs.target.
Feb 12 21:51:39.980054 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 21:51:39.980323 systemd[1]: Stopped target swap.target.
Feb 12 21:51:39.980559 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 21:51:39.980799 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 21:51:39.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.981173 systemd[1]: Stopped target cryptsetup.target.
Feb 12 21:51:39.984391 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 21:51:39.984616 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 21:51:39.984840 kernel: audit: type=1131 audit(1707774699.979:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.984892 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 21:51:39.984993 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 21:51:39.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.987507 systemd[1]: Stopped target paths.target.
Feb 12 21:51:39.987837 kernel: audit: type=1131 audit(1707774699.983:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.987752 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 21:51:39.989751 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 21:51:39.989958 systemd[1]: Stopped target slices.target.
Feb 12 21:51:39.990157 systemd[1]: Stopped target sockets.target.
Feb 12 21:51:39.990351 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 21:51:39.990431 systemd[1]: Closed iscsid.socket.
Feb 12 21:51:39.990672 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 21:51:39.990749 systemd[1]: Closed iscsiuio.socket.
Feb 12 21:51:39.991021 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 21:51:39.991128 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 21:51:39.991391 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 21:51:39.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.991483 systemd[1]: Stopped ignition-files.service.
Feb 12 21:51:39.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.992450 systemd[1]: Stopping ignition-mount.service...
Feb 12 21:51:39.992590 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 21:51:39.993752 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 21:51:39.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:39.994558 systemd[1]: Stopping sysroot-boot.service...
Feb 12 21:51:39.995225 systemd[1]: Stopping systemd-networkd.service...
Feb 12 21:51:39.995559 systemd[1]: Stopping systemd-resolved.service...
Feb 12 21:51:39.995670 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 21:51:39.995838 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 21:51:39.999625 ignition[889]: INFO : Ignition 2.14.0
Feb 12 21:51:40.000331 ignition[889]: INFO : Stage: umount
Feb 12 21:51:40.000524 ignition[889]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:51:40.000669 ignition[889]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Feb 12 21:51:40.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.001978 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 21:51:40.002236 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 21:51:40.002583 ignition[889]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 12 21:51:40.004963 ignition[889]: INFO : umount: umount passed
Feb 12 21:51:40.005147 ignition[889]: INFO : Ignition finished successfully
Feb 12 21:51:40.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.007154 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 21:51:40.007207 systemd[1]: Stopped systemd-resolved.service.
Feb 12 21:51:40.008047 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 21:51:40.008313 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 21:51:40.008359 systemd[1]: Stopped systemd-networkd.service.
Feb 12 21:51:40.008650 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 21:51:40.008690 systemd[1]: Stopped ignition-mount.service.
Feb 12 21:51:40.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.010288 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 21:51:40.010465 systemd[1]: Stopped sysroot-boot.service.
Feb 12 21:51:40.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.009000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 21:51:40.009000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 21:51:40.011529 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 21:51:40.011710 systemd[1]: Finished initrd-cleanup.service.
Feb 12 21:51:40.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.012410 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 21:51:40.012562 systemd[1]: Closed systemd-networkd.socket.
Feb 12 21:51:40.012833 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 21:51:40.012980 systemd[1]: Stopped ignition-disks.service.
Feb 12 21:51:40.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.013228 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 21:51:40.013378 systemd[1]: Stopped ignition-kargs.service.
Feb 12 21:51:40.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.013623 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 21:51:40.013774 systemd[1]: Stopped ignition-setup.service.
Feb 12 21:51:40.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.014017 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 21:51:40.014164 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 21:51:40.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.014927 systemd[1]: Stopping network-cleanup.service...
Feb 12 21:51:40.015341 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 21:51:40.015501 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 21:51:40.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.015777 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Feb 12 21:51:40.015938 systemd[1]: Stopped afterburn-network-kargs.service.
Feb 12 21:51:40.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.016217 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 21:51:40.016366 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 21:51:40.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.016649 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 21:51:40.016671 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 21:51:40.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.019063 systemd[1]: Stopping systemd-udevd.service...
Feb 12 21:51:40.020329 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 21:51:40.023769 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 21:51:40.024074 systemd[1]: Stopped systemd-udevd.service.
Feb 12 21:51:40.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.024782 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 21:51:40.025035 systemd[1]: Stopped network-cleanup.service.
Feb 12 21:51:40.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.025448 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 21:51:40.025657 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 21:51:40.025948 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 21:51:40.025974 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 21:51:40.026410 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 21:51:40.026445 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 21:51:40.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.027025 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 21:51:40.027057 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 21:51:40.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.027524 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 21:51:40.027557 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 21:51:40.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.028641 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 21:51:40.028998 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 21:51:40.029033 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 21:51:40.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.031786 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 21:51:40.031851 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 21:51:40.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:40.032154 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 21:51:40.032673 systemd[1]: Starting initrd-switch-root.service...
Feb 12 21:51:40.037294 systemd[1]: Switching root.
Feb 12 21:51:40.037000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 21:51:40.037000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 21:51:40.037000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 21:51:40.037000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 21:51:40.037000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 21:51:40.052489 iscsid[739]: iscsid shutting down.
Feb 12 21:51:40.052704 systemd-journald[217]: Journal stopped
Feb 12 21:51:42.682573 systemd-journald[217]: Received SIGTERM from PID 1 (n/a).
Feb 12 21:51:42.682597 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 21:51:42.682606 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 21:51:42.682615 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 21:51:42.682625 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 21:51:42.682948 kernel: SELinux: policy capability open_perms=1
Feb 12 21:51:42.682961 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 21:51:42.682968 kernel: SELinux: policy capability always_check_network=0
Feb 12 21:51:42.682977 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 21:51:42.682986 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 21:51:42.682995 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 21:51:42.683003 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 21:51:42.683013 systemd[1]: Successfully loaded SELinux policy in 40ms.
Feb 12 21:51:42.683020 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.262ms.
Feb 12 21:51:42.683031 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 21:51:42.683042 systemd[1]: Detected virtualization vmware.
Feb 12 21:51:42.683055 systemd[1]: Detected architecture x86-64.
Feb 12 21:51:42.683063 systemd[1]: Detected first boot.
Feb 12 21:51:42.683070 systemd[1]: Initializing machine ID from random generator.
Feb 12 21:51:42.683076 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 21:51:42.683085 systemd[1]: Populated /etc with preset unit settings.
Feb 12 21:51:42.683102 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 21:51:42.683115 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 21:51:42.683198 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 21:51:42.683212 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 21:51:42.683219 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 21:51:42.683229 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 21:51:42.683239 systemd[1]: Created slice system-getty.slice.
Feb 12 21:51:42.683246 systemd[1]: Created slice system-modprobe.slice.
Feb 12 21:51:42.683252 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 21:51:42.683259 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 21:51:42.683267 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 21:51:42.683274 systemd[1]: Created slice user.slice.
Feb 12 21:51:42.683280 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 21:51:42.683291 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 21:51:42.683302 systemd[1]: Set up automount boot.automount.
Feb 12 21:51:42.683310 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 21:51:42.683316 systemd[1]: Reached target integritysetup.target.
Feb 12 21:51:42.683322 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 21:51:42.683331 systemd[1]: Reached target remote-fs.target.
Feb 12 21:51:42.683345 systemd[1]: Reached target slices.target.
Feb 12 21:51:42.683358 systemd[1]: Reached target swap.target.
Feb 12 21:51:42.683365 systemd[1]: Reached target torcx.target.
Feb 12 21:51:42.683373 systemd[1]: Reached target veritysetup.target.
Feb 12 21:51:42.683379 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 21:51:42.683386 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 21:51:42.683393 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 21:51:42.683400 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 21:51:42.683410 systemd[1]: Listening on systemd-journald.socket.
Feb 12 21:51:42.683420 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 21:51:42.683427 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 21:51:42.683434 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 21:51:42.683441 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 21:51:42.683448 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 21:51:42.683457 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 21:51:42.683827 systemd[1]: Mounting media.mount...
Feb 12 21:51:42.684193 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 21:51:42.684206 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 21:51:42.684213 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 21:51:42.684220 systemd[1]: Mounting tmp.mount...
Feb 12 21:51:42.684227 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 21:51:42.684238 systemd[1]: Starting ignition-delete-config.service...
Feb 12 21:51:42.684248 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 21:51:42.684256 systemd[1]: Starting modprobe@configfs.service...
Feb 12 21:51:42.684264 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 21:51:42.684270 systemd[1]: Starting modprobe@drm.service...
Feb 12 21:51:42.684277 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 21:51:42.684284 systemd[1]: Starting modprobe@fuse.service...
Feb 12 21:51:42.684292 systemd[1]: Starting modprobe@loop.service...
Feb 12 21:51:42.684308 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 21:51:42.684327 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 12 21:51:42.684339 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb 12 21:51:42.684347 systemd[1]: Starting systemd-journald.service...
Feb 12 21:51:42.684354 systemd[1]: Starting systemd-modules-load.service...
Feb 12 21:51:42.684366 systemd[1]: Starting systemd-network-generator.service...
Feb 12 21:51:42.684378 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 21:51:42.684391 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 21:51:42.684400 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 21:51:42.684407 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 21:51:42.684416 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 21:51:42.684426 systemd[1]: Mounted media.mount.
Feb 12 21:51:42.684437 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 21:51:42.684446 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 21:51:42.684454 systemd[1]: Mounted tmp.mount.
Feb 12 21:51:42.684461 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 21:51:42.684467 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 21:51:42.684480 systemd-journald[1034]: Journal started
Feb 12 21:51:42.684522 systemd-journald[1034]: Runtime Journal (/run/log/journal/fcaa2cfeb248402f861b859d74070959) is 4.8M, max 38.8M, 34.0M free.
Feb 12 21:51:42.602000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb 12 21:51:42.678000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 21:51:42.678000 audit[1034]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe585968c0 a2=4000 a3=7ffe5859695c items=0 ppid=1 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:51:42.678000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 21:51:42.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.699833 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 21:51:42.699874 systemd[1]: Started systemd-journald.service.
Feb 12 21:51:42.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.688148 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 21:51:42.700507 jq[1020]: true
Feb 12 21:51:42.688249 systemd[1]: Finished modprobe@drm.service.
Feb 12 21:51:42.688634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 21:51:42.688725 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 21:51:42.689780 systemd[1]: Finished systemd-modules-load.service.
Feb 12 21:51:42.690016 systemd[1]: Finished systemd-network-generator.service.
Feb 12 21:51:42.690251 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 21:51:42.690487 systemd[1]: Reached target network-pre.target.
Feb 12 21:51:42.690582 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 21:51:42.707296 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 21:51:42.708501 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 21:51:42.708633 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 21:51:42.709518 systemd[1]: Starting systemd-random-seed.service...
Feb 12 21:51:42.710587 systemd[1]: Starting systemd-sysctl.service...
Feb 12 21:51:42.711333 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 21:51:42.713825 systemd[1]: Finished modprobe@configfs.service.
Feb 12 21:51:42.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.716000 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 21:51:42.726011 kernel: loop: module loaded
Feb 12 21:51:42.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.726120 systemd-journald[1034]: Time spent on flushing to /var/log/journal/fcaa2cfeb248402f861b859d74070959 is 62.846ms for 1961 entries.
Feb 12 21:51:42.726120 systemd-journald[1034]: System Journal (/var/log/journal/fcaa2cfeb248402f861b859d74070959) is 8.0M, max 584.8M, 576.8M free.
Feb 12 21:51:42.817760 systemd-journald[1034]: Received client request to flush runtime journal.
Feb 12 21:51:42.817816 kernel: fuse: init (API version 7.34)
Feb 12 21:51:42.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.718461 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 21:51:42.723797 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 21:51:42.818198 jq[1069]: true
Feb 12 21:51:42.723903 systemd[1]: Finished modprobe@loop.service.
Feb 12 21:51:42.724089 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 21:51:42.729145 systemd[1]: Finished systemd-random-seed.service.
Feb 12 21:51:42.729330 systemd[1]: Reached target first-boot-complete.target.
Feb 12 21:51:42.752659 systemd[1]: Finished systemd-sysctl.service.
Feb 12 21:51:42.755644 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 21:51:42.755764 systemd[1]: Finished modprobe@fuse.service.
Feb 12 21:51:42.756735 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 21:51:42.759049 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 21:51:42.776641 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 21:51:42.777699 systemd[1]: Starting systemd-sysusers.service...
Feb 12 21:51:42.819278 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 21:51:42.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.833141 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 21:51:42.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.834188 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 21:51:42.844821 udevadm[1105]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 12 21:51:42.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.886283 systemd[1]: Finished systemd-sysusers.service.
Feb 12 21:51:42.887414 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 21:51:42.956169 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 21:51:42.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:42.974931 ignition[1076]: Ignition 2.14.0
Feb 12 21:51:42.975165 ignition[1076]: deleting config from guestinfo properties
Feb 12 21:51:42.978175 ignition[1076]: Successfully deleted config
Feb 12 21:51:42.978796 systemd[1]: Finished ignition-delete-config.service.
Feb 12 21:51:42.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:43.271052 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 21:51:43.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:43.272100 systemd[1]: Starting systemd-udevd.service...
Feb 12 21:51:43.283852 systemd-udevd[1113]: Using default interface naming scheme 'v252'.
Feb 12 21:51:43.324388 systemd[1]: Started systemd-udevd.service.
Feb 12 21:51:43.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:43.325766 systemd[1]: Starting systemd-networkd.service...
Feb 12 21:51:43.345588 systemd[1]: Starting systemd-userdbd.service...
Feb 12 21:51:43.346698 systemd[1]: Found device dev-ttyS0.device.
Feb 12 21:51:43.373990 systemd[1]: Started systemd-userdbd.service.
Feb 12 21:51:43.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:43.388730 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 12 21:51:43.393732 kernel: ACPI: button: Power Button [PWRF]
Feb 12 21:51:43.435703 systemd-networkd[1114]: lo: Link UP
Feb 12 21:51:43.435708 systemd-networkd[1114]: lo: Gained carrier
Feb 12 21:51:43.436534 systemd-networkd[1114]: Enumeration completed
Feb 12 21:51:43.436601 systemd-networkd[1114]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Feb 12 21:51:43.436606 systemd[1]: Started systemd-networkd.service.
Feb 12 21:51:43.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:51:43.439943 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Feb 12 21:51:43.440071 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Feb 12 21:51:43.441119 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready
Feb 12 21:51:43.441412 systemd-networkd[1114]: ens192: Link UP
Feb 12 21:51:43.441538 systemd-networkd[1114]: ens192: Gained carrier
Feb 12 21:51:43.478000 audit[1121]: AVC avc: denied { confidentiality } for pid=1121 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 21:51:43.478000 audit[1121]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55616710d200 a1=32194 a2=7effc43e9bc5 a3=5 items=108 ppid=1113 pid=1121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:51:43.478000 audit: CWD cwd="/"
Feb 12 21:51:43.478000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=1 name=(null) inode=23283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=2 name=(null) inode=23283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=3 name=(null) inode=23284 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=4 name=(null) inode=23283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=5 name=(null) inode=23285 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=6 name=(null) inode=23283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=7 name=(null) inode=23286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=8 name=(null) inode=23286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=9 name=(null) inode=23287 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=10 name=(null) inode=23286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=11 name=(null) inode=23288 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=12 name=(null) inode=23286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=13 name=(null) inode=23289 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=14 name=(null) inode=23286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=15 name=(null) inode=23290 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=16 name=(null) inode=23286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=17 name=(null) inode=23291 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=18 name=(null) inode=23283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=19 name=(null) inode=23292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=20 name=(null) inode=23292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=21 name=(null) inode=23293 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:51:43.478000 audit: PATH item=22 name=(null) inode=23292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=23 name=(null) inode=23294 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=24 name=(null) inode=23292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=25 name=(null) inode=23295 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=26 name=(null) inode=23292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=27 name=(null) inode=23296 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=28 name=(null) inode=23292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=29 name=(null) inode=23297 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=30 name=(null) inode=23283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=31 name=(null) inode=23298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=32 name=(null) inode=23298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=33 name=(null) inode=23299 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=34 name=(null) inode=23298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=35 name=(null) inode=23300 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=36 name=(null) inode=23298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=37 name=(null) inode=23301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=38 name=(null) inode=23298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=39 name=(null) inode=23302 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=40 name=(null) inode=23298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
21:51:43.478000 audit: PATH item=41 name=(null) inode=23303 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=42 name=(null) inode=23283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=43 name=(null) inode=23304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=44 name=(null) inode=23304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=45 name=(null) inode=23305 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=46 name=(null) inode=23304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=47 name=(null) inode=23306 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=48 name=(null) inode=23304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=49 name=(null) inode=23307 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=50 
name=(null) inode=23304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=51 name=(null) inode=23308 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=52 name=(null) inode=23304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=53 name=(null) inode=23309 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=55 name=(null) inode=23310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=56 name=(null) inode=23310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=57 name=(null) inode=23311 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=58 name=(null) inode=23310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=59 name=(null) inode=23312 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=60 name=(null) inode=23310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=61 name=(null) inode=23313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=62 name=(null) inode=23313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=63 name=(null) inode=23314 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=64 name=(null) inode=23313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=65 name=(null) inode=23315 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=66 name=(null) inode=23313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=67 name=(null) inode=23316 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=68 name=(null) inode=23313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=69 name=(null) inode=23317 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=70 name=(null) inode=23313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=71 name=(null) inode=23318 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=72 name=(null) inode=23310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=73 name=(null) inode=23319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=74 name=(null) inode=23319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=75 name=(null) inode=23320 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=76 name=(null) inode=23319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=77 name=(null) inode=23321 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=78 name=(null) inode=23319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=79 name=(null) inode=23322 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=80 name=(null) inode=23319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=81 name=(null) inode=23323 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=82 name=(null) inode=23319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=83 name=(null) inode=23324 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=84 name=(null) inode=23310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=85 name=(null) inode=23325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=86 name=(null) inode=23325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=87 name=(null) inode=23326 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=88 name=(null) inode=23325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=89 name=(null) inode=23327 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=90 name=(null) inode=23325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=91 name=(null) inode=23328 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=92 name=(null) inode=23325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=93 name=(null) inode=23329 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=94 name=(null) inode=23325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=95 name=(null) inode=23330 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
21:51:43.478000 audit: PATH item=96 name=(null) inode=23310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=97 name=(null) inode=23331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=98 name=(null) inode=23331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=99 name=(null) inode=23332 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=100 name=(null) inode=23331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=101 name=(null) inode=23333 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=102 name=(null) inode=23331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=103 name=(null) inode=23334 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=104 name=(null) inode=23331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=105 
name=(null) inode=23335 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=106 name=(null) inode=23331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PATH item=107 name=(null) inode=23336 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:51:43.478000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 21:51:43.490384 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Feb 12 21:51:43.490534 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Feb 12 21:51:43.490615 kernel: Guest personality initialized and is active Feb 12 21:51:43.490629 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Feb 12 21:51:43.491339 kernel: Initialized host personality Feb 12 21:51:43.505739 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Feb 12 21:51:43.512727 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 21:51:43.514577 (udev-worker)[1129]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Feb 12 21:51:43.522732 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 21:51:43.536732 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1123) Feb 12 21:51:43.543656 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 12 21:51:43.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:51:43.589027 systemd[1]: Finished systemd-udev-settle.service. Feb 12 21:51:43.590304 systemd[1]: Starting lvm2-activation-early.service... Feb 12 21:51:43.716476 lvm[1147]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 21:51:43.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:43.754455 systemd[1]: Finished lvm2-activation-early.service. Feb 12 21:51:43.754674 systemd[1]: Reached target cryptsetup.target. Feb 12 21:51:43.755994 systemd[1]: Starting lvm2-activation.service... Feb 12 21:51:43.759636 lvm[1149]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 21:51:43.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:43.793402 systemd[1]: Finished lvm2-activation.service. Feb 12 21:51:43.793605 systemd[1]: Reached target local-fs-pre.target. Feb 12 21:51:43.793743 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 21:51:43.793761 systemd[1]: Reached target local-fs.target. Feb 12 21:51:43.793877 systemd[1]: Reached target machines.target. Feb 12 21:51:43.795105 systemd[1]: Starting ldconfig.service... Feb 12 21:51:43.805834 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 21:51:43.805870 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 21:51:43.807049 systemd[1]: Starting systemd-boot-update.service... 
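The `lvm2-activation` warnings above ("Failed to connect to lvmetad. Falling back to device scanning.") appear whenever the lvmetad caching daemon is not running. In lvm2 releases that still shipped the daemon (before 2.03), the same behavior could be requested explicitly in `lvm.conf` instead of being logged as a warning on every activation. A hypothetical fragment (the `use_lvmetad` key is the real pre-2.03 lvm.conf setting; whether this image honors it is an assumption):

```ini
# /etc/lvm/lvm.conf (fragment) - hypothetical sketch.
# Disables lvmetad use outright, so lvm falls back to
# device scanning by configuration rather than by failure.
global {
    use_lvmetad = 0
}
```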
Feb 12 21:51:43.808079 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 21:51:43.809292 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 21:51:43.809506 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 21:51:43.809541 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 21:51:43.810565 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 21:51:43.824602 systemd-tmpfiles[1155]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 21:51:43.831787 systemd-tmpfiles[1155]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 21:51:43.837613 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1152 (bootctl) Feb 12 21:51:43.838331 systemd-tmpfiles[1155]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 21:51:43.838407 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 21:51:43.847674 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 21:51:43.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:44.831720 systemd-fsck[1161]: fsck.fat 4.2 (2021-01-31) Feb 12 21:51:44.831720 systemd-fsck[1161]: /dev/sda1: 789 files, 115339/258078 clusters Feb 12 21:51:44.832897 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 21:51:44.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:51:44.833986 systemd[1]: Mounting boot.mount... Feb 12 21:51:44.870850 systemd-networkd[1114]: ens192: Gained IPv6LL Feb 12 21:51:44.918030 systemd[1]: Mounted boot.mount. Feb 12 21:51:44.992024 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 21:51:44.992527 systemd[1]: Finished systemd-boot-update.service. Feb 12 21:51:44.996261 kernel: kauditd_printk_skb: 197 callbacks suppressed Feb 12 21:51:44.996319 kernel: audit: type=1130 audit(1707774704.990:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:44.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:44.992904 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 21:51:44.999457 kernel: audit: type=1130 audit(1707774704.994:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:44.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:45.056156 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 21:51:45.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:45.057243 systemd[1]: Starting audit-rules.service... 
Feb 12 21:51:45.059737 kernel: audit: type=1130 audit(1707774705.054:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:45.060138 systemd[1]: Starting clean-ca-certificates.service... Feb 12 21:51:45.061133 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 21:51:45.063851 systemd[1]: Starting systemd-resolved.service... Feb 12 21:51:45.067199 systemd[1]: Starting systemd-timesyncd.service... Feb 12 21:51:45.069288 systemd[1]: Starting systemd-update-utmp.service... Feb 12 21:51:45.070292 systemd[1]: Finished clean-ca-certificates.service. Feb 12 21:51:45.081421 kernel: audit: type=1130 audit(1707774705.071:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:45.081523 kernel: audit: type=1127 audit(1707774705.072:126): pid=1175 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 21:51:45.081546 kernel: audit: type=1130 audit(1707774705.076:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:45.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:51:45.072000 audit[1175]: SYSTEM_BOOT pid=1175 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 21:51:45.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:45.075415 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 21:51:45.076345 systemd[1]: Finished systemd-update-utmp.service. Feb 12 21:51:45.110496 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 21:51:45.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:45.113733 kernel: audit: type=1130 audit(1707774705.108:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:45.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:45.167187 systemd-resolved[1172]: Positive Trust Anchors: Feb 12 21:51:45.167252 systemd[1]: Started systemd-timesyncd.service. Feb 12 21:51:45.167455 systemd[1]: Reached target time-set.target. Feb 12 21:51:45.167768 systemd-resolved[1172]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 21:51:45.167838 systemd-resolved[1172]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 21:51:45.170769 kernel: audit: type=1130 audit(1707774705.165:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:51:45.179353 augenrules[1193]: No rules Feb 12 21:51:45.177000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 21:51:45.179865 systemd[1]: Finished audit-rules.service. 
Feb 12 21:51:45.184617 kernel: audit: type=1305 audit(1707774705.177:130): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 21:51:45.184647 kernel: audit: type=1300 audit(1707774705.177:130): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeac071fb0 a2=420 a3=0 items=0 ppid=1169 pid=1193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 21:51:45.177000 audit[1193]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeac071fb0 a2=420 a3=0 items=0 ppid=1169 pid=1193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 21:51:45.177000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 21:51:45.275423 systemd-resolved[1172]: Defaulting to hostname 'linux'. Feb 12 21:51:45.277078 systemd[1]: Started systemd-resolved.service. Feb 12 21:51:45.277322 systemd[1]: Reached target network.target. Feb 12 21:51:45.277478 systemd[1]: Reached target nss-lookup.target. Feb 12 21:52:29.731859 systemd-resolved[1172]: Clock change detected. Flushing caches. Feb 12 21:52:29.731866 systemd-timesyncd[1174]: Contacted time server 135.148.100.14:123 (0.flatcar.pool.ntp.org). Feb 12 21:52:29.732233 systemd-timesyncd[1174]: Initial clock synchronization to Mon 2024-02-12 21:52:29.731766 UTC. Feb 12 21:52:29.929024 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 21:52:29.943584 systemd[1]: Finished ldconfig.service. Feb 12 21:52:29.944773 systemd[1]: Starting systemd-update-done.service... Feb 12 21:52:29.948878 systemd[1]: Finished systemd-update-done.service. 
Feb 12 21:52:29.949053 systemd[1]: Reached target sysinit.target. Feb 12 21:52:29.949192 systemd[1]: Started motdgen.path. Feb 12 21:52:29.949291 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 21:52:29.949491 systemd[1]: Started logrotate.timer. Feb 12 21:52:29.949648 systemd[1]: Started mdadm.timer. Feb 12 21:52:29.949731 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 21:52:29.949828 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 21:52:29.949848 systemd[1]: Reached target paths.target. Feb 12 21:52:29.949939 systemd[1]: Reached target timers.target. Feb 12 21:52:29.950190 systemd[1]: Listening on dbus.socket. Feb 12 21:52:29.951282 systemd[1]: Starting docker.socket... Feb 12 21:52:29.952541 systemd[1]: Listening on sshd.socket. Feb 12 21:52:29.952936 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 21:52:29.953204 systemd[1]: Listening on docker.socket. Feb 12 21:52:29.953340 systemd[1]: Reached target sockets.target. Feb 12 21:52:29.953432 systemd[1]: Reached target basic.target. Feb 12 21:52:29.953619 systemd[1]: System is tainted: cgroupsv1 Feb 12 21:52:29.953643 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 21:52:29.953657 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 21:52:29.954555 systemd[1]: Starting containerd.service... Feb 12 21:52:29.955815 systemd[1]: Starting dbus.service... Feb 12 21:52:29.956822 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 21:52:29.957988 systemd[1]: Starting extend-filesystems.service... 
Feb 12 21:52:29.958132 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 21:52:29.959180 systemd[1]: Starting motdgen.service... Feb 12 21:52:29.960343 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 21:52:29.961310 systemd[1]: Starting prepare-critools.service... Feb 12 21:52:29.967239 jq[1208]: false Feb 12 21:52:29.964317 systemd[1]: Starting prepare-helm.service... Feb 12 21:52:29.967953 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 21:52:29.969725 systemd[1]: Starting sshd-keygen.service... Feb 12 21:52:29.972801 systemd[1]: Starting systemd-logind.service... Feb 12 21:52:29.972923 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 21:52:29.972956 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 21:52:29.988912 jq[1226]: true Feb 12 21:52:29.973788 systemd[1]: Starting update-engine.service... Feb 12 21:52:29.974893 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 21:52:29.976188 systemd[1]: Starting vmtoolsd.service... Feb 12 21:52:29.982326 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 21:52:29.982451 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 21:52:29.990565 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 21:52:29.990708 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 21:52:29.998027 jq[1233]: true Feb 12 21:52:30.005214 systemd[1]: Started vmtoolsd.service. 
Feb 12 21:52:30.017617 tar[1230]: crictl Feb 12 21:52:30.018242 tar[1231]: linux-amd64/helm Feb 12 21:52:30.022247 tar[1229]: ./ Feb 12 21:52:30.022247 tar[1229]: ./macvlan Feb 12 21:52:30.064654 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 21:52:30.066788 systemd[1]: Finished motdgen.service. Feb 12 21:52:30.067276 extend-filesystems[1209]: Found sda Feb 12 21:52:30.067836 extend-filesystems[1209]: Found sda1 Feb 12 21:52:30.067836 extend-filesystems[1209]: Found sda2 Feb 12 21:52:30.067836 extend-filesystems[1209]: Found sda3 Feb 12 21:52:30.067836 extend-filesystems[1209]: Found usr Feb 12 21:52:30.067836 extend-filesystems[1209]: Found sda4 Feb 12 21:52:30.067836 extend-filesystems[1209]: Found sda6 Feb 12 21:52:30.067836 extend-filesystems[1209]: Found sda7 Feb 12 21:52:30.067836 extend-filesystems[1209]: Found sda9 Feb 12 21:52:30.067836 extend-filesystems[1209]: Checking size of /dev/sda9 Feb 12 21:52:30.096619 kernel: NET: Registered PF_VSOCK protocol family Feb 12 21:52:30.105282 env[1242]: time="2024-02-12T21:52:30.105245930Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 21:52:30.124139 extend-filesystems[1209]: Old size kept for /dev/sda9 Feb 12 21:52:30.142479 extend-filesystems[1209]: Found sr0 Feb 12 21:52:30.132213 dbus-daemon[1206]: [system] SELinux support is enabled Feb 12 21:52:30.125229 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 21:52:30.125376 systemd[1]: Finished extend-filesystems.service. Feb 12 21:52:30.132342 systemd[1]: Started dbus.service. Feb 12 21:52:30.134000 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 21:52:30.134045 systemd[1]: Reached target system-config.target. 
Feb 12 21:52:30.134195 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 21:52:30.134204 systemd[1]: Reached target user-config.target. Feb 12 21:52:30.145775 update_engine[1225]: I0212 21:52:30.144349 1225 main.cc:92] Flatcar Update Engine starting Feb 12 21:52:30.147383 bash[1271]: Updated "/home/core/.ssh/authorized_keys" Feb 12 21:52:30.147081 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 21:52:30.147743 systemd[1]: Started update-engine.service. Feb 12 21:52:30.151978 update_engine[1225]: I0212 21:52:30.150874 1225 update_check_scheduler.cc:74] Next update check in 6m22s Feb 12 21:52:30.149472 systemd[1]: Started locksmithd.service. Feb 12 21:52:30.205725 systemd-logind[1224]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 21:52:30.205742 systemd-logind[1224]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 21:52:30.205867 systemd-logind[1224]: New seat seat0. Feb 12 21:52:30.209780 systemd[1]: Started systemd-logind.service. Feb 12 21:52:30.210441 tar[1229]: ./static Feb 12 21:52:30.237086 env[1242]: time="2024-02-12T21:52:30.237054926Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 21:52:30.237253 env[1242]: time="2024-02-12T21:52:30.237242150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 21:52:30.241782 env[1242]: time="2024-02-12T21:52:30.241729926Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 21:52:30.241860 env[1242]: time="2024-02-12T21:52:30.241849603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 21:52:30.242057 env[1242]: time="2024-02-12T21:52:30.242045226Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 21:52:30.242109 env[1242]: time="2024-02-12T21:52:30.242098930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 21:52:30.242156 env[1242]: time="2024-02-12T21:52:30.242145151Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 21:52:30.242208 env[1242]: time="2024-02-12T21:52:30.242196077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 21:52:30.242297 env[1242]: time="2024-02-12T21:52:30.242287650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 21:52:30.242476 env[1242]: time="2024-02-12T21:52:30.242466513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 21:52:30.242647 env[1242]: time="2024-02-12T21:52:30.242632471Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 21:52:30.242718 env[1242]: time="2024-02-12T21:52:30.242708034Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 21:52:30.242789 env[1242]: time="2024-02-12T21:52:30.242778834Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 21:52:30.242838 env[1242]: time="2024-02-12T21:52:30.242827061Z" level=info msg="metadata content store policy set" policy=shared Feb 12 21:52:30.283386 env[1242]: time="2024-02-12T21:52:30.282018197Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 21:52:30.283386 env[1242]: time="2024-02-12T21:52:30.282048557Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 21:52:30.283386 env[1242]: time="2024-02-12T21:52:30.282056655Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 21:52:30.283386 env[1242]: time="2024-02-12T21:52:30.282077945Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 21:52:30.283386 env[1242]: time="2024-02-12T21:52:30.282087274Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 21:52:30.283386 env[1242]: time="2024-02-12T21:52:30.282095743Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 21:52:30.283386 env[1242]: time="2024-02-12T21:52:30.282102972Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 12 21:52:30.283386 env[1242]: time="2024-02-12T21:52:30.282110435Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 21:52:30.283386 env[1242]: time="2024-02-12T21:52:30.282117623Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 21:52:30.283386 env[1242]: time="2024-02-12T21:52:30.282142271Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 21:52:30.283386 env[1242]: time="2024-02-12T21:52:30.282150079Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 21:52:30.283386 env[1242]: time="2024-02-12T21:52:30.282157946Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 21:52:30.283386 env[1242]: time="2024-02-12T21:52:30.282228149Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 21:52:30.283386 env[1242]: time="2024-02-12T21:52:30.282274170Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 21:52:30.283991 env[1242]: time="2024-02-12T21:52:30.282486312Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 21:52:30.283991 env[1242]: time="2024-02-12T21:52:30.282504697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 21:52:30.283991 env[1242]: time="2024-02-12T21:52:30.282512734Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 21:52:30.283991 env[1242]: time="2024-02-12T21:52:30.282544260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 12 21:52:30.283991 env[1242]: time="2024-02-12T21:52:30.282552573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 21:52:30.283991 env[1242]: time="2024-02-12T21:52:30.282559695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 21:52:30.283991 env[1242]: time="2024-02-12T21:52:30.282566518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 21:52:30.283991 env[1242]: time="2024-02-12T21:52:30.282573581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 21:52:30.283991 env[1242]: time="2024-02-12T21:52:30.282581024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 21:52:30.283991 env[1242]: time="2024-02-12T21:52:30.282587211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 21:52:30.283991 env[1242]: time="2024-02-12T21:52:30.282593124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 21:52:30.283991 env[1242]: time="2024-02-12T21:52:30.282609625Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 21:52:30.283991 env[1242]: time="2024-02-12T21:52:30.282678556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 21:52:30.283991 env[1242]: time="2024-02-12T21:52:30.282687607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 21:52:30.283991 env[1242]: time="2024-02-12T21:52:30.282695234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 21:52:30.283609 systemd[1]: Started containerd.service. 
Feb 12 21:52:30.284245 env[1242]: time="2024-02-12T21:52:30.282701851Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 21:52:30.284245 env[1242]: time="2024-02-12T21:52:30.282710294Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 21:52:30.284245 env[1242]: time="2024-02-12T21:52:30.282717048Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 21:52:30.284245 env[1242]: time="2024-02-12T21:52:30.282728205Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 21:52:30.284245 env[1242]: time="2024-02-12T21:52:30.282750682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 21:52:30.284322 env[1242]: time="2024-02-12T21:52:30.282866714Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d 
NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 21:52:30.284322 env[1242]: time="2024-02-12T21:52:30.282898682Z" level=info msg="Connect containerd service" Feb 12 21:52:30.284322 env[1242]: time="2024-02-12T21:52:30.282915533Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 21:52:30.284322 env[1242]: time="2024-02-12T21:52:30.283234751Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 21:52:30.284322 env[1242]: time="2024-02-12T21:52:30.283331603Z" level=info msg="Start subscribing containerd event" Feb 12 21:52:30.284322 env[1242]: time="2024-02-12T21:52:30.283360615Z" level=info msg="Start recovering state" Feb 12 21:52:30.284322 env[1242]: time="2024-02-12T21:52:30.283410402Z" level=info 
msg="Start event monitor" Feb 12 21:52:30.284322 env[1242]: time="2024-02-12T21:52:30.283422386Z" level=info msg="Start snapshots syncer" Feb 12 21:52:30.284322 env[1242]: time="2024-02-12T21:52:30.283428009Z" level=info msg="Start cni network conf syncer for default" Feb 12 21:52:30.284322 env[1242]: time="2024-02-12T21:52:30.283431999Z" level=info msg="Start streaming server" Feb 12 21:52:30.284322 env[1242]: time="2024-02-12T21:52:30.283363592Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 21:52:30.284322 env[1242]: time="2024-02-12T21:52:30.283506967Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 21:52:30.284322 env[1242]: time="2024-02-12T21:52:30.283536099Z" level=info msg="containerd successfully booted in 0.189992s" Feb 12 21:52:30.300153 tar[1229]: ./vlan Feb 12 21:52:30.360222 tar[1229]: ./portmap Feb 12 21:52:30.405709 tar[1229]: ./host-local Feb 12 21:52:30.447035 tar[1229]: ./vrf Feb 12 21:52:30.491173 tar[1229]: ./bridge Feb 12 21:52:30.541944 tar[1229]: ./tuning Feb 12 21:52:30.584771 tar[1229]: ./firewall Feb 12 21:52:30.645329 tar[1229]: ./host-device Feb 12 21:52:30.695340 tar[1229]: ./sbr Feb 12 21:52:30.735495 tar[1229]: ./loopback Feb 12 21:52:30.737977 systemd[1]: Finished prepare-critools.service. Feb 12 21:52:30.768667 tar[1229]: ./dhcp Feb 12 21:52:30.808305 tar[1231]: linux-amd64/LICENSE Feb 12 21:52:30.808377 tar[1231]: linux-amd64/README.md Feb 12 21:52:30.817077 systemd[1]: Finished prepare-helm.service. Feb 12 21:52:30.837925 tar[1229]: ./ptp Feb 12 21:52:30.862279 tar[1229]: ./ipvlan Feb 12 21:52:30.885367 tar[1229]: ./bandwidth Feb 12 21:52:30.930908 systemd[1]: Finished prepare-cni-plugins.service. 
Feb 12 21:52:31.143528 locksmithd[1286]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 21:52:31.266482 sshd_keygen[1246]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 21:52:31.278954 systemd[1]: Finished sshd-keygen.service. Feb 12 21:52:31.280260 systemd[1]: Starting issuegen.service... Feb 12 21:52:31.284184 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 21:52:31.284331 systemd[1]: Finished issuegen.service. Feb 12 21:52:31.285644 systemd[1]: Starting systemd-user-sessions.service... Feb 12 21:52:31.294237 systemd[1]: Finished systemd-user-sessions.service. Feb 12 21:52:31.295259 systemd[1]: Started getty@tty1.service. Feb 12 21:52:31.296144 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 21:52:31.296357 systemd[1]: Reached target getty.target. Feb 12 21:52:31.296493 systemd[1]: Reached target multi-user.target. Feb 12 21:52:31.297590 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 21:52:31.302357 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 21:52:31.302490 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 21:52:31.304821 systemd[1]: Startup finished in 7.481s (kernel) + 6.878s (userspace) = 14.360s. Feb 12 21:52:31.338283 login[1366]: pam_lastlog(login:session): file /var/log/lastlog is locked/read Feb 12 21:52:31.339716 login[1367]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 21:52:31.347355 systemd[1]: Created slice user-500.slice. Feb 12 21:52:31.348195 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 21:52:31.351456 systemd-logind[1224]: New session 1 of user core. Feb 12 21:52:31.354941 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 21:52:31.355892 systemd[1]: Starting user@500.service... 
Feb 12 21:52:31.359050 (systemd)[1373]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:52:31.428431 systemd[1373]: Queued start job for default target default.target. Feb 12 21:52:31.428802 systemd[1373]: Reached target paths.target. Feb 12 21:52:31.428822 systemd[1373]: Reached target sockets.target. Feb 12 21:52:31.428831 systemd[1373]: Reached target timers.target. Feb 12 21:52:31.428838 systemd[1373]: Reached target basic.target. Feb 12 21:52:31.428862 systemd[1373]: Reached target default.target. Feb 12 21:52:31.428877 systemd[1373]: Startup finished in 66ms. Feb 12 21:52:31.428943 systemd[1]: Started user@500.service. Feb 12 21:52:31.429642 systemd[1]: Started session-1.scope. Feb 12 21:52:32.339154 login[1366]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 21:52:32.344052 systemd[1]: Started session-2.scope. Feb 12 21:52:32.344259 systemd-logind[1224]: New session 2 of user core. Feb 12 21:53:10.177101 systemd[1]: Created slice system-sshd.slice. Feb 12 21:53:10.178027 systemd[1]: Started sshd@0-139.178.70.99:22-139.178.89.65:50672.service. Feb 12 21:53:10.298382 sshd[1395]: Accepted publickey for core from 139.178.89.65 port 50672 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:53:10.299200 sshd[1395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:53:10.302221 systemd[1]: Started session-3.scope. Feb 12 21:53:10.302419 systemd-logind[1224]: New session 3 of user core. Feb 12 21:53:10.349134 systemd[1]: Started sshd@1-139.178.70.99:22-139.178.89.65:50688.service. Feb 12 21:53:10.383516 sshd[1400]: Accepted publickey for core from 139.178.89.65 port 50688 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:53:10.384348 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:53:10.387063 systemd[1]: Started session-4.scope. 
Feb 12 21:53:10.387693 systemd-logind[1224]: New session 4 of user core. Feb 12 21:53:10.440938 sshd[1400]: pam_unix(sshd:session): session closed for user core Feb 12 21:53:10.442390 systemd[1]: Started sshd@2-139.178.70.99:22-139.178.89.65:50694.service. Feb 12 21:53:10.444266 systemd-logind[1224]: Session 4 logged out. Waiting for processes to exit. Feb 12 21:53:10.444327 systemd[1]: sshd@1-139.178.70.99:22-139.178.89.65:50688.service: Deactivated successfully. Feb 12 21:53:10.444849 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 21:53:10.445159 systemd-logind[1224]: Removed session 4. Feb 12 21:53:10.473546 sshd[1405]: Accepted publickey for core from 139.178.89.65 port 50694 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:53:10.474278 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:53:10.476972 systemd-logind[1224]: New session 5 of user core. Feb 12 21:53:10.477151 systemd[1]: Started session-5.scope. Feb 12 21:53:10.525240 sshd[1405]: pam_unix(sshd:session): session closed for user core Feb 12 21:53:10.527173 systemd[1]: Started sshd@3-139.178.70.99:22-139.178.89.65:50706.service. Feb 12 21:53:10.527426 systemd[1]: sshd@2-139.178.70.99:22-139.178.89.65:50694.service: Deactivated successfully. Feb 12 21:53:10.527922 systemd-logind[1224]: Session 5 logged out. Waiting for processes to exit. Feb 12 21:53:10.527948 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 21:53:10.529904 systemd-logind[1224]: Removed session 5. Feb 12 21:53:10.556941 sshd[1413]: Accepted publickey for core from 139.178.89.65 port 50706 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:53:10.557870 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:53:10.560239 systemd-logind[1224]: New session 6 of user core. Feb 12 21:53:10.560512 systemd[1]: Started session-6.scope. 
Feb 12 21:53:10.613318 sshd[1413]: pam_unix(sshd:session): session closed for user core Feb 12 21:53:10.613670 systemd[1]: Started sshd@4-139.178.70.99:22-139.178.89.65:50712.service. Feb 12 21:53:10.617817 systemd[1]: sshd@3-139.178.70.99:22-139.178.89.65:50706.service: Deactivated successfully. Feb 12 21:53:10.619750 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 21:53:10.620140 systemd-logind[1224]: Session 6 logged out. Waiting for processes to exit. Feb 12 21:53:10.621139 systemd-logind[1224]: Removed session 6. Feb 12 21:53:10.645883 sshd[1419]: Accepted publickey for core from 139.178.89.65 port 50712 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:53:10.647132 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:53:10.650452 systemd[1]: Started session-7.scope. Feb 12 21:53:10.651279 systemd-logind[1224]: New session 7 of user core. Feb 12 21:53:10.728063 sudo[1425]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 21:53:10.728181 sudo[1425]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 21:53:11.285004 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 21:53:11.289115 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 21:53:11.289336 systemd[1]: Reached target network-online.target. Feb 12 21:53:11.291165 systemd[1]: Starting docker.service... 
Feb 12 21:53:11.313642 env[1442]: time="2024-02-12T21:53:11.313612525Z" level=info msg="Starting up" Feb 12 21:53:11.314750 env[1442]: time="2024-02-12T21:53:11.314738378Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 21:53:11.314812 env[1442]: time="2024-02-12T21:53:11.314802917Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 21:53:11.314873 env[1442]: time="2024-02-12T21:53:11.314857019Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 21:53:11.314920 env[1442]: time="2024-02-12T21:53:11.314911333Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 21:53:11.315972 env[1442]: time="2024-02-12T21:53:11.315951656Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 21:53:11.315972 env[1442]: time="2024-02-12T21:53:11.315962389Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 21:53:11.315972 env[1442]: time="2024-02-12T21:53:11.315970510Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 21:53:11.316051 env[1442]: time="2024-02-12T21:53:11.315976058Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 21:53:11.320185 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3340974350-merged.mount: Deactivated successfully. Feb 12 21:53:11.377188 env[1442]: time="2024-02-12T21:53:11.377160516Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 12 21:53:11.377188 env[1442]: time="2024-02-12T21:53:11.377180481Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 12 21:53:11.377318 env[1442]: time="2024-02-12T21:53:11.377261276Z" level=info msg="Loading containers: start." 
Feb 12 21:53:11.474621 kernel: Initializing XFRM netlink socket
Feb 12 21:53:11.525191 env[1442]: time="2024-02-12T21:53:11.525166501Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 12 21:53:11.565270 systemd-networkd[1114]: docker0: Link UP
Feb 12 21:53:11.570486 env[1442]: time="2024-02-12T21:53:11.570467192Z" level=info msg="Loading containers: done."
Feb 12 21:53:11.576786 env[1442]: time="2024-02-12T21:53:11.576756167Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 12 21:53:11.576914 env[1442]: time="2024-02-12T21:53:11.576896027Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 12 21:53:11.576986 env[1442]: time="2024-02-12T21:53:11.576972129Z" level=info msg="Daemon has completed initialization"
Feb 12 21:53:11.595748 systemd[1]: Started docker.service.
Feb 12 21:53:11.599799 env[1442]: time="2024-02-12T21:53:11.599763003Z" level=info msg="API listen on /run/docker.sock"
Feb 12 21:53:11.611488 systemd[1]: Reloading.
Feb 12 21:53:11.662731 /usr/lib/systemd/system-generators/torcx-generator[1579]: time="2024-02-12T21:53:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 21:53:11.663047 /usr/lib/systemd/system-generators/torcx-generator[1579]: time="2024-02-12T21:53:11Z" level=info msg="torcx already run"
Feb 12 21:53:11.721282 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 21:53:11.721294 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 21:53:11.732892 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 21:53:11.781496 systemd[1]: Started kubelet.service.
Feb 12 21:53:11.832841 kubelet[1645]: E0212 21:53:11.832768 1645 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 12 21:53:11.834365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 21:53:11.834460 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 21:53:12.603557 env[1242]: time="2024-02-12T21:53:12.603524205Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\""
Feb 12 21:53:13.189100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1782139622.mount: Deactivated successfully.
Feb 12 21:53:14.739225 env[1242]: time="2024-02-12T21:53:14.739186446Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:14.745091 env[1242]: time="2024-02-12T21:53:14.745071314Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:14.752937 env[1242]: time="2024-02-12T21:53:14.752914505Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:14.757478 env[1242]: time="2024-02-12T21:53:14.757460378Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:14.758005 env[1242]: time="2024-02-12T21:53:14.757980912Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\""
Feb 12 21:53:14.765061 env[1242]: time="2024-02-12T21:53:14.765040090Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\""
Feb 12 21:53:14.949035 update_engine[1225]: I0212 21:53:14.949008 1225 update_attempter.cc:509] Updating boot flags...
Feb 12 21:53:16.583502 env[1242]: time="2024-02-12T21:53:16.583424431Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:16.594552 env[1242]: time="2024-02-12T21:53:16.594527919Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:16.597638 env[1242]: time="2024-02-12T21:53:16.597617512Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:16.606199 env[1242]: time="2024-02-12T21:53:16.606178863Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:16.606758 env[1242]: time="2024-02-12T21:53:16.606727610Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\""
Feb 12 21:53:16.614337 env[1242]: time="2024-02-12T21:53:16.614300869Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\""
Feb 12 21:53:17.678969 env[1242]: time="2024-02-12T21:53:17.678935497Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:17.692384 env[1242]: time="2024-02-12T21:53:17.692362802Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:17.699817 env[1242]: time="2024-02-12T21:53:17.699799270Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:17.707621 env[1242]: time="2024-02-12T21:53:17.707588363Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:17.708090 env[1242]: time="2024-02-12T21:53:17.708066128Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\""
Feb 12 21:53:17.714141 env[1242]: time="2024-02-12T21:53:17.714111940Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 12 21:53:18.649183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount7392126.mount: Deactivated successfully.
Feb 12 21:53:19.025236 env[1242]: time="2024-02-12T21:53:19.025207556Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:19.107647 env[1242]: time="2024-02-12T21:53:19.107617129Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:19.116508 env[1242]: time="2024-02-12T21:53:19.116491378Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:19.121085 env[1242]: time="2024-02-12T21:53:19.121071232Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:19.121305 env[1242]: time="2024-02-12T21:53:19.121291597Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb 12 21:53:19.126976 env[1242]: time="2024-02-12T21:53:19.126958172Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 12 21:53:19.580084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount844214812.mount: Deactivated successfully.
Feb 12 21:53:19.582157 env[1242]: time="2024-02-12T21:53:19.582131523Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:19.583139 env[1242]: time="2024-02-12T21:53:19.583127560Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:19.584059 env[1242]: time="2024-02-12T21:53:19.584047743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:19.585010 env[1242]: time="2024-02-12T21:53:19.584998380Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:19.585314 env[1242]: time="2024-02-12T21:53:19.585299828Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 12 21:53:19.591557 env[1242]: time="2024-02-12T21:53:19.591524936Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\""
Feb 12 21:53:20.200545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2236765122.mount: Deactivated successfully.
Feb 12 21:53:21.957398 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 12 21:53:21.957543 systemd[1]: Stopped kubelet.service.
Feb 12 21:53:21.958861 systemd[1]: Started kubelet.service.
Feb 12 21:53:23.230248 kubelet[1708]: E0212 21:53:23.230211 1708 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 12 21:53:23.231674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 21:53:23.231760 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 21:53:23.740511 env[1242]: time="2024-02-12T21:53:23.740484018Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:23.755921 env[1242]: time="2024-02-12T21:53:23.755893052Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:23.761974 env[1242]: time="2024-02-12T21:53:23.761957452Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:23.766681 env[1242]: time="2024-02-12T21:53:23.766655566Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:23.767333 env[1242]: time="2024-02-12T21:53:23.767311142Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\""
Feb 12 21:53:23.794557 env[1242]: time="2024-02-12T21:53:23.794530767Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\""
Feb 12 21:53:24.324885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1258710703.mount: Deactivated successfully.
Feb 12 21:53:25.385923 env[1242]: time="2024-02-12T21:53:25.385896984Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:25.387193 env[1242]: time="2024-02-12T21:53:25.387180401Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:25.388028 env[1242]: time="2024-02-12T21:53:25.388013891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:25.389055 env[1242]: time="2024-02-12T21:53:25.389040354Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:25.389810 env[1242]: time="2024-02-12T21:53:25.389482919Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\""
Feb 12 21:53:27.154132 systemd[1]: Stopped kubelet.service.
Feb 12 21:53:27.163099 systemd[1]: Reloading.
Feb 12 21:53:27.215657 /usr/lib/systemd/system-generators/torcx-generator[1796]: time="2024-02-12T21:53:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 21:53:27.215678 /usr/lib/systemd/system-generators/torcx-generator[1796]: time="2024-02-12T21:53:27Z" level=info msg="torcx already run"
Feb 12 21:53:27.280084 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 21:53:27.280204 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 21:53:27.292274 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 21:53:27.345549 systemd[1]: Started kubelet.service.
Feb 12 21:53:27.450094 kubelet[1862]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 21:53:27.450094 kubelet[1862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 21:53:27.454572 kubelet[1862]: I0212 21:53:27.454541 1862 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 21:53:27.471745 kubelet[1862]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 21:53:27.471745 kubelet[1862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 21:53:27.804578 kubelet[1862]: I0212 21:53:27.804556 1862 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 12 21:53:27.804731 kubelet[1862]: I0212 21:53:27.804721 1862 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 21:53:27.804948 kubelet[1862]: I0212 21:53:27.804938 1862 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 12 21:53:27.900183 kubelet[1862]: I0212 21:53:27.900160 1862 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 21:53:27.900381 kubelet[1862]: E0212 21:53:27.900360 1862 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:27.905675 kubelet[1862]: I0212 21:53:27.905648 1862 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 21:53:27.907162 kubelet[1862]: I0212 21:53:27.907141 1862 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 21:53:27.907216 kubelet[1862]: I0212 21:53:27.907207 1862 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 21:53:27.908865 kubelet[1862]: I0212 21:53:27.908845 1862 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 21:53:27.908926 kubelet[1862]: I0212 21:53:27.908870 1862 container_manager_linux.go:308] "Creating device plugin manager"
Feb 12 21:53:27.909984 kubelet[1862]: I0212 21:53:27.909966 1862 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 21:53:27.920016 kubelet[1862]: W0212 21:53:27.919979 1862 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:27.920085 kubelet[1862]: E0212 21:53:27.920023 1862 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:27.920194 kubelet[1862]: I0212 21:53:27.920178 1862 kubelet.go:398] "Attempting to sync node with API server"
Feb 12 21:53:27.920237 kubelet[1862]: I0212 21:53:27.920198 1862 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 21:53:27.920237 kubelet[1862]: I0212 21:53:27.920225 1862 kubelet.go:297] "Adding apiserver pod source"
Feb 12 21:53:27.920237 kubelet[1862]: I0212 21:53:27.920237 1862 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 21:53:27.923849 kubelet[1862]: I0212 21:53:27.923834 1862 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 21:53:27.924727 kubelet[1862]: W0212 21:53:27.924700 1862 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:27.924771 kubelet[1862]: E0212 21:53:27.924729 1862 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:27.926855 kubelet[1862]: W0212 21:53:27.926838 1862 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 12 21:53:27.931617 kubelet[1862]: I0212 21:53:27.931594 1862 server.go:1186] "Started kubelet"
Feb 12 21:53:27.933282 kubelet[1862]: E0212 21:53:27.933266 1862 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 21:53:27.933335 kubelet[1862]: E0212 21:53:27.933287 1862 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 21:53:27.935571 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 12 21:53:27.935738 kubelet[1862]: I0212 21:53:27.935702 1862 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 21:53:27.936192 kubelet[1862]: I0212 21:53:27.936184 1862 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 21:53:27.938173 kubelet[1862]: I0212 21:53:27.938159 1862 server.go:451] "Adding debug handlers to kubelet server"
Feb 12 21:53:27.941988 kubelet[1862]: I0212 21:53:27.941975 1862 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 12 21:53:27.944717 kubelet[1862]: I0212 21:53:27.944704 1862 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 21:53:27.945284 kubelet[1862]: W0212 21:53:27.945256 1862 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:27.945321 kubelet[1862]: E0212 21:53:27.945300 1862 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:27.945358 kubelet[1862]: E0212 21:53:27.945344 1862 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:27.945440 kubelet[1862]: E0212 21:53:27.945378 1862 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b33c23bf1f6c38", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 53, 27, 931579448, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 53, 27, 931579448, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://139.178.70.99:6443/api/v1/namespaces/default/events": dial tcp 139.178.70.99:6443: connect: connection refused'(may retry after sleeping)
Feb 12 21:53:27.974166 kubelet[1862]: I0212 21:53:27.974148 1862 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 21:53:27.983131 kubelet[1862]: I0212 21:53:27.983118 1862 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 21:53:27.983245 kubelet[1862]: I0212 21:53:27.983238 1862 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 21:53:27.983306 kubelet[1862]: I0212 21:53:27.983294 1862 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 21:53:27.984708 kubelet[1862]: I0212 21:53:27.984694 1862 policy_none.go:49] "None policy: Start"
Feb 12 21:53:27.985002 kubelet[1862]: I0212 21:53:27.984992 1862 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 21:53:27.985034 kubelet[1862]: I0212 21:53:27.985007 1862 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 21:53:27.989475 kubelet[1862]: I0212 21:53:27.989453 1862 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 21:53:27.989619 kubelet[1862]: I0212 21:53:27.989591 1862 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 21:53:27.991752 kubelet[1862]: E0212 21:53:27.991741 1862 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 12 21:53:27.999561 kubelet[1862]: I0212 21:53:27.999550 1862 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 21:53:27.999653 kubelet[1862]: I0212 21:53:27.999646 1862 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 12 21:53:27.999712 kubelet[1862]: I0212 21:53:27.999705 1862 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 12 21:53:27.999776 kubelet[1862]: E0212 21:53:27.999770 1862 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 12 21:53:28.000107 kubelet[1862]: W0212 21:53:28.000084 1862 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:28.000171 kubelet[1862]: E0212 21:53:28.000163 1862 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:28.044704 kubelet[1862]: I0212 21:53:28.044690 1862 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 21:53:28.045009 kubelet[1862]: E0212 21:53:28.044995 1862 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost"
Feb 12 21:53:28.101665 kubelet[1862]: I0212 21:53:28.100348 1862 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:53:28.104461 kubelet[1862]: I0212 21:53:28.104450 1862 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:53:28.106039 kubelet[1862]: I0212 21:53:28.106023 1862 status_manager.go:698] "Failed to get status for pod" podUID=674dbcac8111a63ee8131c191379aa8a pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.99:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.99:6443: connect: connection refused"
Feb 12 21:53:28.108672 kubelet[1862]: I0212 21:53:28.108656 1862 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:53:28.110414 kubelet[1862]: I0212 21:53:28.110375 1862 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.99:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.99:6443: connect: connection refused"
Feb 12 21:53:28.114440 kubelet[1862]: I0212 21:53:28.114420 1862 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.99:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.99:6443: connect: connection refused"
Feb 12 21:53:28.146127 kubelet[1862]: E0212 21:53:28.146094 1862 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:28.246284 kubelet[1862]: I0212 21:53:28.246261 1862 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 21:53:28.246568 kubelet[1862]: E0212 21:53:28.246550 1862 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost"
Feb 12 21:53:28.249454 kubelet[1862]: I0212 21:53:28.249444 1862 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/674dbcac8111a63ee8131c191379aa8a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"674dbcac8111a63ee8131c191379aa8a\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 21:53:28.249522 kubelet[1862]: I0212 21:53:28.249514 1862 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/674dbcac8111a63ee8131c191379aa8a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"674dbcac8111a63ee8131c191379aa8a\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 21:53:28.249582 kubelet[1862]: I0212 21:53:28.249575 1862 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 21:53:28.249654 kubelet[1862]: I0212 21:53:28.249647 1862 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 21:53:28.249709 kubelet[1862]: I0212 21:53:28.249703 1862 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 21:53:28.249766 kubelet[1862]: I0212 21:53:28.249759 1862 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/674dbcac8111a63ee8131c191379aa8a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"674dbcac8111a63ee8131c191379aa8a\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 21:53:28.249818 kubelet[1862]: I0212 21:53:28.249812 1862 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 21:53:28.249878 kubelet[1862]: I0212 21:53:28.249871 1862 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 21:53:28.249932 kubelet[1862]: I0212 21:53:28.249925 1862 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost"
Feb 12 21:53:28.410171 env[1242]: time="2024-02-12T21:53:28.409776401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}"
Feb 12 21:53:28.411206 env[1242]: time="2024-02-12T21:53:28.411005041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:674dbcac8111a63ee8131c191379aa8a,Namespace:kube-system,Attempt:0,}"
Feb 12 21:53:28.415279 env[1242]: time="2024-02-12T21:53:28.415140550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}"
Feb 12 21:53:28.547130 kubelet[1862]: E0212 21:53:28.547108 1862 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:28.647684 kubelet[1862]: I0212 21:53:28.647667 1862 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 21:53:28.648007 kubelet[1862]: E0212 21:53:28.647997 1862 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost"
Feb 12 21:53:28.876393 kubelet[1862]: W0212 21:53:28.876345 1862 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:28.876393 kubelet[1862]: E0212 21:53:28.876379 1862 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:28.888484 env[1242]: time="2024-02-12T21:53:28.887895885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:28.888202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4151157705.mount: Deactivated successfully.
Feb 12 21:53:28.889189 env[1242]: time="2024-02-12T21:53:28.889159922Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:28.890906 env[1242]: time="2024-02-12T21:53:28.890890768Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:28.891415 env[1242]: time="2024-02-12T21:53:28.891398516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:28.893624 env[1242]: time="2024-02-12T21:53:28.893608289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:28.895836 env[1242]: time="2024-02-12T21:53:28.895818414Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:28.911195 env[1242]: time="2024-02-12T21:53:28.907659305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:53:28.911195 env[1242]: time="2024-02-12T21:53:28.907695760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:53:28.911195 env[1242]: time="2024-02-12T21:53:28.907708882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:53:28.911195 env[1242]: time="2024-02-12T21:53:28.907838284Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46392893a697bc87c61515045d2b0f93822898cb625c553da698cb6e8475409f pid=1937 runtime=io.containerd.runc.v2
Feb 12 21:53:28.912411 env[1242]: time="2024-02-12T21:53:28.912396341Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:28.913938 env[1242]: time="2024-02-12T21:53:28.912823239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:28.913938 env[1242]: time="2024-02-12T21:53:28.913206789Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:28.913938 env[1242]: time="2024-02-12T21:53:28.913555556Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:28.914085 env[1242]: time="2024-02-12T21:53:28.914068343Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:28.914583 env[1242]: time="2024-02-12T21:53:28.914566554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:28.938111 env[1242]: time="2024-02-12T21:53:28.938060201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:53:28.938489 env[1242]: time="2024-02-12T21:53:28.938100570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:53:28.938489 env[1242]: time="2024-02-12T21:53:28.938108272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:53:28.938489 env[1242]: time="2024-02-12T21:53:28.938218101Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/021c6ef18a416498c0cc668dca3f6519795f53b132f52d41912628f6de9ac23a pid=1964 runtime=io.containerd.runc.v2
Feb 12 21:53:28.974083 env[1242]: time="2024-02-12T21:53:28.974046932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"46392893a697bc87c61515045d2b0f93822898cb625c553da698cb6e8475409f\""
Feb 12 21:53:28.976479 env[1242]: time="2024-02-12T21:53:28.976456957Z" level=info msg="CreateContainer within sandbox \"46392893a697bc87c61515045d2b0f93822898cb625c553da698cb6e8475409f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 12 21:53:28.978422 env[1242]: time="2024-02-12T21:53:28.977258566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:53:28.978422 env[1242]: time="2024-02-12T21:53:28.977318529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:53:28.978422 env[1242]: time="2024-02-12T21:53:28.977327525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:53:28.978422 env[1242]: time="2024-02-12T21:53:28.977442822Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de2a920ce129ef5f4cdde099a872046f41365191841fddbea190a8ec5225c28d pid=2016 runtime=io.containerd.runc.v2
Feb 12 21:53:28.986042 env[1242]: time="2024-02-12T21:53:28.986003812Z" level=info msg="CreateContainer within sandbox \"46392893a697bc87c61515045d2b0f93822898cb625c553da698cb6e8475409f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5549f3b2c113498dda03a3568b5926cc42ee986222cbb3e3c932adf1e3748ea0\""
Feb 12 21:53:28.986484 env[1242]: time="2024-02-12T21:53:28.986469414Z" level=info msg="StartContainer for \"5549f3b2c113498dda03a3568b5926cc42ee986222cbb3e3c932adf1e3748ea0\""
Feb 12 21:53:28.999675 env[1242]: time="2024-02-12T21:53:28.999642950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:674dbcac8111a63ee8131c191379aa8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"021c6ef18a416498c0cc668dca3f6519795f53b132f52d41912628f6de9ac23a\""
Feb 12 21:53:29.003393 env[1242]: time="2024-02-12T21:53:29.001789098Z" level=info msg="CreateContainer within sandbox \"021c6ef18a416498c0cc668dca3f6519795f53b132f52d41912628f6de9ac23a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 12 21:53:29.022153 env[1242]: time="2024-02-12T21:53:29.022121477Z" level=info msg="CreateContainer within sandbox \"021c6ef18a416498c0cc668dca3f6519795f53b132f52d41912628f6de9ac23a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0ce2f595aaad2f2151edc730504e6ad07d563dba804b71db63a27850f1529561\""
Feb 12 21:53:29.022750 env[1242]: time="2024-02-12T21:53:29.022731680Z" level=info msg="StartContainer for \"0ce2f595aaad2f2151edc730504e6ad07d563dba804b71db63a27850f1529561\""
Feb 12 21:53:29.034720 env[1242]: time="2024-02-12T21:53:29.034677159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"de2a920ce129ef5f4cdde099a872046f41365191841fddbea190a8ec5225c28d\""
Feb 12 21:53:29.045754 env[1242]: time="2024-02-12T21:53:29.045723274Z" level=info msg="CreateContainer within sandbox \"de2a920ce129ef5f4cdde099a872046f41365191841fddbea190a8ec5225c28d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 12 21:53:29.076470 env[1242]: time="2024-02-12T21:53:29.076433435Z" level=info msg="StartContainer for \"5549f3b2c113498dda03a3568b5926cc42ee986222cbb3e3c932adf1e3748ea0\" returns successfully"
Feb 12 21:53:29.076616 env[1242]: time="2024-02-12T21:53:29.076429377Z" level=info msg="CreateContainer within sandbox \"de2a920ce129ef5f4cdde099a872046f41365191841fddbea190a8ec5225c28d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8f647b611b6efe3fc904b424c9184eacbfea2f94ec6759a0091204309fbf72ee\""
Feb 12 21:53:29.076995 env[1242]: time="2024-02-12T21:53:29.076979157Z" level=info msg="StartContainer for \"8f647b611b6efe3fc904b424c9184eacbfea2f94ec6759a0091204309fbf72ee\""
Feb 12 21:53:29.097866 env[1242]: time="2024-02-12T21:53:29.097812128Z" level=info msg="StartContainer for \"0ce2f595aaad2f2151edc730504e6ad07d563dba804b71db63a27850f1529561\" returns successfully"
Feb 12 21:53:29.130776 env[1242]: time="2024-02-12T21:53:29.129985340Z" level=info msg="StartContainer for \"8f647b611b6efe3fc904b424c9184eacbfea2f94ec6759a0091204309fbf72ee\" returns successfully"
Feb 12 21:53:29.290239 kubelet[1862]: W0212 21:53:29.290202 1862 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:29.290239 kubelet[1862]: E0212 21:53:29.290240 1862 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:29.307621 kubelet[1862]: W0212 21:53:29.307580 1862 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:29.307621 kubelet[1862]: E0212 21:53:29.307623 1862 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:29.308830 kubelet[1862]: W0212 21:53:29.308784 1862 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:29.308830 kubelet[1862]: E0212 21:53:29.308807 1862 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:29.348306 kubelet[1862]: E0212 21:53:29.348278 1862 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:29.449556 kubelet[1862]: I0212 21:53:29.449496 1862 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 21:53:29.449859 kubelet[1862]: E0212 21:53:29.449671 1862 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost"
Feb 12 21:53:30.006074 kubelet[1862]: I0212 21:53:30.006042 1862 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.99:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.99:6443: connect: connection refused"
Feb 12 21:53:30.008255 kubelet[1862]: I0212 21:53:30.008245 1862 status_manager.go:698] "Failed to get status for pod" podUID=674dbcac8111a63ee8131c191379aa8a pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.99:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.99:6443: connect: connection refused"
Feb 12 21:53:30.063400 kubelet[1862]: E0212 21:53:30.063379 1862 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.99:6443: connect: connection refused
Feb 12 21:53:31.050637 kubelet[1862]: I0212 21:53:31.050620 1862 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 21:53:31.537780 kubelet[1862]: E0212 21:53:31.537760 1862 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 12 21:53:31.619144 kubelet[1862]: I0212 21:53:31.619112 1862 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Feb 12 21:53:31.628980 kubelet[1862]: E0212 21:53:31.628956 1862 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 21:53:31.729814 kubelet[1862]: E0212 21:53:31.729792 1862 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 21:53:31.830500 kubelet[1862]: E0212 21:53:31.830438 1862 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 21:53:31.930898 kubelet[1862]: E0212 21:53:31.930869 1862 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 21:53:32.031327 kubelet[1862]: E0212 21:53:32.031292 1862 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 21:53:32.132269 kubelet[1862]: E0212 21:53:32.132112 1862 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 21:53:32.232859 kubelet[1862]: E0212 21:53:32.232703 1862 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 21:53:32.333547 kubelet[1862]: E0212 21:53:32.333516 1862 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 21:53:32.434383 kubelet[1862]: E0212 21:53:32.434307 1862 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 21:53:32.535218 kubelet[1862]: E0212 21:53:32.535199 1862 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 21:53:32.635807 kubelet[1862]: E0212 21:53:32.635787 1862 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 21:53:32.923739 kubelet[1862]: I0212 21:53:32.923713 1862 apiserver.go:52] "Watching apiserver"
Feb 12 21:53:32.945741 kubelet[1862]: I0212 21:53:32.945722 1862 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 21:53:32.972903 kubelet[1862]: I0212 21:53:32.972889 1862 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 21:53:33.907559 systemd[1]: Reloading.
Feb 12 21:53:33.969120 /usr/lib/systemd/system-generators/torcx-generator[2185]: time="2024-02-12T21:53:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 21:53:33.969345 /usr/lib/systemd/system-generators/torcx-generator[2185]: time="2024-02-12T21:53:33Z" level=info msg="torcx already run"
Feb 12 21:53:34.034450 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 21:53:34.034571 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 21:53:34.047828 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 21:53:34.113590 systemd[1]: Stopping kubelet.service...
Feb 12 21:53:34.129193 systemd[1]: kubelet.service: Deactivated successfully.
Feb 12 21:53:34.129500 systemd[1]: Stopped kubelet.service.
Feb 12 21:53:34.133550 systemd[1]: Started kubelet.service.
Feb 12 21:53:34.215750 kubelet[2252]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 21:53:34.215750 kubelet[2252]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 21:53:34.216011 kubelet[2252]: I0212 21:53:34.215773 2252 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 21:53:34.216546 kubelet[2252]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 21:53:34.216546 kubelet[2252]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 21:53:34.218357 kubelet[2252]: I0212 21:53:34.218347 2252 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 12 21:53:34.218442 kubelet[2252]: I0212 21:53:34.218435 2252 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 21:53:34.218585 kubelet[2252]: I0212 21:53:34.218578 2252 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 12 21:53:34.219328 kubelet[2252]: I0212 21:53:34.219320 2252 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 12 21:53:34.219828 kubelet[2252]: I0212 21:53:34.219816 2252 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 21:53:34.221749 kubelet[2252]: I0212 21:53:34.221739 2252 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 21:53:34.222027 kubelet[2252]: I0212 21:53:34.222020 2252 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 21:53:34.222105 kubelet[2252]: I0212 21:53:34.222097 2252 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 21:53:34.222200 kubelet[2252]: I0212 21:53:34.222192 2252 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 21:53:34.222247 kubelet[2252]: I0212 21:53:34.222241 2252 container_manager_linux.go:308] "Creating device plugin manager"
Feb 12 21:53:34.222308 kubelet[2252]: I0212 21:53:34.222301 2252 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 21:53:34.223955 kubelet[2252]: I0212 21:53:34.223939 2252 kubelet.go:398] "Attempting to sync node with API server"
Feb 12 21:53:34.223993 kubelet[2252]: I0212 21:53:34.223959 2252 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 21:53:34.223993 kubelet[2252]: I0212 21:53:34.223978 2252 kubelet.go:297] "Adding apiserver pod source"
Feb 12 21:53:34.223993 kubelet[2252]: I0212 21:53:34.223987 2252 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 21:53:34.226685 kubelet[2252]: I0212 21:53:34.225051 2252 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 21:53:34.226685 kubelet[2252]: I0212 21:53:34.225307 2252 server.go:1186] "Started kubelet"
Feb 12 21:53:34.228651 kubelet[2252]: I0212 21:53:34.227072 2252 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 21:53:34.233530 kubelet[2252]: I0212 21:53:34.233516 2252 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 21:53:34.234285 kubelet[2252]: I0212 21:53:34.234272 2252 server.go:451] "Adding debug handlers to kubelet server"
Feb 12 21:53:34.234876 kubelet[2252]: E0212 21:53:34.234858 2252 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 21:53:34.234917 kubelet[2252]: E0212 21:53:34.234883 2252 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 21:53:34.237984 kubelet[2252]: I0212 21:53:34.236241 2252 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 12 21:53:34.237984 kubelet[2252]: I0212 21:53:34.236400 2252 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 21:53:34.265668 kubelet[2252]: I0212 21:53:34.265648 2252 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 21:53:34.273890 kubelet[2252]: I0212 21:53:34.273875 2252 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 21:53:34.273890 kubelet[2252]: I0212 21:53:34.273887 2252 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 12 21:53:34.273982 kubelet[2252]: I0212 21:53:34.273898 2252 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 12 21:53:34.273982 kubelet[2252]: E0212 21:53:34.273922 2252 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 21:53:34.304410 sudo[2303]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 12 21:53:34.304534 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 12 21:53:34.318760 kubelet[2252]: I0212 21:53:34.318744 2252 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 21:53:34.318857 kubelet[2252]: I0212 21:53:34.318850 2252 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 21:53:34.318907 kubelet[2252]: I0212 21:53:34.318900 2252 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 21:53:34.319035 kubelet[2252]: I0212 21:53:34.319028 2252 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 12 21:53:34.319087 kubelet[2252]: I0212 21:53:34.319081 2252 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 12 21:53:34.319130 kubelet[2252]: I0212 21:53:34.319123 2252 policy_none.go:49] "None policy: Start"
Feb 12 21:53:34.319508 kubelet[2252]: I0212 21:53:34.319495 2252 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 21:53:34.319508 kubelet[2252]: I0212 21:53:34.319508 2252 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 21:53:34.319594 kubelet[2252]: I0212 21:53:34.319585 2252 state_mem.go:75] "Updated machine memory state"
Feb 12 21:53:34.320776 kubelet[2252]: I0212 21:53:34.320765 2252 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 21:53:34.322311 kubelet[2252]: I0212 21:53:34.321210 2252 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 21:53:34.337442 kubelet[2252]: I0212 21:53:34.337429 2252 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 21:53:34.347714 kubelet[2252]: I0212 21:53:34.347691 2252 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Feb 12 21:53:34.347821 kubelet[2252]: I0212 21:53:34.347761 2252 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Feb 12 21:53:34.374299 kubelet[2252]: I0212 21:53:34.374271 2252 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:53:34.374879 kubelet[2252]: I0212 21:53:34.374471 2252 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:53:34.374993 kubelet[2252]: I0212 21:53:34.374986 2252 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:53:34.377916 kubelet[2252]: E0212 21:53:34.377896 2252 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 12 21:53:34.537237 kubelet[2252]: I0212 21:53:34.537169 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/674dbcac8111a63ee8131c191379aa8a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"674dbcac8111a63ee8131c191379aa8a\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 21:53:34.537237 kubelet[2252]: I0212 21:53:34.537203 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 21:53:34.537237 kubelet[2252]: I0212 21:53:34.537231 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 21:53:34.537357 kubelet[2252]: I0212 21:53:34.537244 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 21:53:34.537357 kubelet[2252]: I0212 21:53:34.537256 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost"
Feb 12 21:53:34.537357 kubelet[2252]: I0212 21:53:34.537269 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/674dbcac8111a63ee8131c191379aa8a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"674dbcac8111a63ee8131c191379aa8a\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 21:53:34.537357 kubelet[2252]: I0212 21:53:34.537280 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/674dbcac8111a63ee8131c191379aa8a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"674dbcac8111a63ee8131c191379aa8a\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 21:53:34.537357 kubelet[2252]: I0212 21:53:34.537298 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 21:53:34.537446 kubelet[2252]: I0212 21:53:34.537311 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 21:53:34.792524 sudo[2303]: pam_unix(sudo:session): session closed for user root
Feb 12 21:53:35.224667 kubelet[2252]: I0212 21:53:35.224635 2252 apiserver.go:52] "Watching apiserver"
Feb 12 21:53:35.237137 kubelet[2252]: I0212 21:53:35.237120 2252 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 21:53:35.241269 kubelet[2252]: I0212 21:53:35.241250 2252 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 21:53:35.641379 kubelet[2252]: E0212 21:53:35.641307 2252 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Feb 12 21:53:35.836403 kubelet[2252]: E0212 21:53:35.836376 2252 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Feb 12 21:53:36.032683 kubelet[2252]: E0212 21:53:36.032656 2252 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 12 21:53:36.628691 kubelet[2252]: I0212 21:53:36.628666 2252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.627528199 pod.CreationTimestamp="2024-02-12 21:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:53:36.627458445 +0000 UTC m=+2.444460601" watchObservedRunningTime="2024-02-12 21:53:36.627528199 +0000 UTC m=+2.444530350"
Feb 12 21:53:36.628973 kubelet[2252]: I0212 21:53:36.628736 2252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.628721751 pod.CreationTimestamp="2024-02-12 21:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:53:36.234299409 +0000 UTC m=+2.051301566" watchObservedRunningTime="2024-02-12 21:53:36.628721751 +0000 UTC m=+2.445723910"
Feb 12 21:53:36.738195 sudo[1425]: pam_unix(sudo:session): session closed for user root
Feb 12 21:53:36.740038 sshd[1419]: pam_unix(sshd:session): session closed for user core
Feb 12 21:53:36.741917 systemd[1]: sshd@4-139.178.70.99:22-139.178.89.65:50712.service: Deactivated successfully.
Feb 12 21:53:36.742400 systemd[1]: session-7.scope: Deactivated successfully.
Feb 12 21:53:36.742682 systemd-logind[1224]: Session 7 logged out. Waiting for processes to exit.
Feb 12 21:53:36.743204 systemd-logind[1224]: Removed session 7.
Feb 12 21:53:37.033086 kubelet[2252]: I0212 21:53:37.032984 2252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.032962304 pod.CreationTimestamp="2024-02-12 21:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:53:37.02864446 +0000 UTC m=+2.845646618" watchObservedRunningTime="2024-02-12 21:53:37.032962304 +0000 UTC m=+2.849964464"
Feb 12 21:53:47.764130 kubelet[2252]: I0212 21:53:47.764113 2252 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 12 21:53:47.764646 env[1242]: time="2024-02-12T21:53:47.764621209Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 21:53:47.764803 kubelet[2252]: I0212 21:53:47.764747 2252 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 12 21:53:48.109415 kubelet[2252]: I0212 21:53:48.109331 2252 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:53:48.109616 kubelet[2252]: I0212 21:53:48.109595 2252 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:53:48.305720 kubelet[2252]: I0212 21:53:48.305693 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d88bf497-97fa-47f1-8364-d2d71c3a9bfc-kube-proxy\") pod \"kube-proxy-nnbpt\" (UID: \"d88bf497-97fa-47f1-8364-d2d71c3a9bfc\") " pod="kube-system/kube-proxy-nnbpt"
Feb 12 21:53:48.305720 kubelet[2252]: I0212 21:53:48.305725 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-host-proc-sys-net\") pod \"cilium-pbq6f\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " pod="kube-system/cilium-pbq6f"
Feb 12 21:53:48.305857 kubelet[2252]: I0212 21:53:48.305739 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d88bf497-97fa-47f1-8364-d2d71c3a9bfc-xtables-lock\") pod \"kube-proxy-nnbpt\" (UID: \"d88bf497-97fa-47f1-8364-d2d71c3a9bfc\") " pod="kube-system/kube-proxy-nnbpt"
Feb 12 21:53:48.305857 kubelet[2252]: I0212 21:53:48.305751 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d88bf497-97fa-47f1-8364-d2d71c3a9bfc-lib-modules\") pod \"kube-proxy-nnbpt\" (UID: \"d88bf497-97fa-47f1-8364-d2d71c3a9bfc\") " pod="kube-system/kube-proxy-nnbpt"
Feb 12 21:53:48.305857 kubelet[2252]: I0212 21:53:48.305765 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6tdv\" (UniqueName: \"kubernetes.io/projected/d88bf497-97fa-47f1-8364-d2d71c3a9bfc-kube-api-access-x6tdv\") pod \"kube-proxy-nnbpt\" (UID: \"d88bf497-97fa-47f1-8364-d2d71c3a9bfc\") " pod="kube-system/kube-proxy-nnbpt"
Feb 12 21:53:48.305857 kubelet[2252]: I0212 21:53:48.305776 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cilium-cgroup\") pod \"cilium-pbq6f\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " pod="kube-system/cilium-pbq6f"
Feb 12 21:53:48.305857 kubelet[2252]: I0212 21:53:48.305787 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-host-proc-sys-kernel\") pod \"cilium-pbq6f\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " pod="kube-system/cilium-pbq6f"
Feb 12 21:53:48.305949 kubelet[2252]: I0212 21:53:48.305798 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-hubble-tls\") pod \"cilium-pbq6f\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " pod="kube-system/cilium-pbq6f"
Feb 12 21:53:48.305949 kubelet[2252]: I0212 21:53:48.305811 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-clustermesh-secrets\") pod \"cilium-pbq6f\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " pod="kube-system/cilium-pbq6f"
Feb 12 21:53:48.305949 kubelet[2252]: I0212 21:53:48.305823 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cilium-config-path\") pod \"cilium-pbq6f\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " pod="kube-system/cilium-pbq6f"
Feb 12 21:53:48.305949 kubelet[2252]: I0212 21:53:48.305836 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-lib-modules\") pod \"cilium-pbq6f\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " pod="kube-system/cilium-pbq6f"
Feb 12 21:53:48.305949 kubelet[2252]: I0212 21:53:48.305847 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-hostproc\") pod \"cilium-pbq6f\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " pod="kube-system/cilium-pbq6f"
Feb 12 21:53:48.305949 kubelet[2252]: I0212 21:53:48.305859 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cni-path\") pod \"cilium-pbq6f\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " pod="kube-system/cilium-pbq6f"
Feb 12 21:53:48.306054 kubelet[2252]: I0212 21:53:48.305870 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-etc-cni-netd\") pod \"cilium-pbq6f\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " pod="kube-system/cilium-pbq6f"
Feb 12 21:53:48.306054 kubelet[2252]: I0212 21:53:48.305883 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-xtables-lock\") pod \"cilium-pbq6f\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " pod="kube-system/cilium-pbq6f"
Feb 12 21:53:48.306054 kubelet[2252]: I0212 21:53:48.305895 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mccpp\" (UniqueName: \"kubernetes.io/projected/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-kube-api-access-mccpp\") pod \"cilium-pbq6f\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " pod="kube-system/cilium-pbq6f"
Feb 12 21:53:48.306054 kubelet[2252]: I0212 21:53:48.305906 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cilium-run\") pod \"cilium-pbq6f\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " pod="kube-system/cilium-pbq6f"
Feb 12 21:53:48.306054 kubelet[2252]: I0212 21:53:48.305917 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-bpf-maps\") pod \"cilium-pbq6f\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " pod="kube-system/cilium-pbq6f"
Feb 12 21:53:48.571571 kubelet[2252]: I0212 21:53:48.571547 2252 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:53:48.714538 env[1242]: time="2024-02-12T21:53:48.714256240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nnbpt,Uid:d88bf497-97fa-47f1-8364-d2d71c3a9bfc,Namespace:kube-system,Attempt:0,}"
Feb 12 21:53:48.732215 kubelet[2252]: I0212 21:53:48.732194 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jfhb\" (UniqueName: \"kubernetes.io/projected/b5ccdbcd-fe44-4418-a186-a9a9a534702c-kube-api-access-7jfhb\") pod \"cilium-operator-f59cbd8c6-gkbxn\" (UID: \"b5ccdbcd-fe44-4418-a186-a9a9a534702c\") " pod="kube-system/cilium-operator-f59cbd8c6-gkbxn"
Feb 12 21:53:48.732377 kubelet[2252]: I0212 21:53:48.732367 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5ccdbcd-fe44-4418-a186-a9a9a534702c-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-gkbxn\" (UID: \"b5ccdbcd-fe44-4418-a186-a9a9a534702c\") " pod="kube-system/cilium-operator-f59cbd8c6-gkbxn"
Feb 12 21:53:48.745982 env[1242]: time="2024-02-12T21:53:48.745829540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pbq6f,Uid:ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6,Namespace:kube-system,Attempt:0,}"
Feb 12 21:53:48.982494 env[1242]: time="2024-02-12T21:53:48.982438602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:53:48.982717 env[1242]: time="2024-02-12T21:53:48.982494145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:53:48.982717 env[1242]: time="2024-02-12T21:53:48.982511136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:53:48.982856 env[1242]: time="2024-02-12T21:53:48.982821409Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a3ff4c75eb5c655bf63431f871e8e8c3b7ee8797b3ce0dc653c08b6c25c381e pid=2354 runtime=io.containerd.runc.v2
Feb 12 21:53:49.003425 env[1242]: time="2024-02-12T21:53:49.003384244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:53:49.003539 env[1242]: time="2024-02-12T21:53:49.003524378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:53:49.003628 env[1242]: time="2024-02-12T21:53:49.003612699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:53:49.003796 env[1242]: time="2024-02-12T21:53:49.003761166Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b pid=2389 runtime=io.containerd.runc.v2
Feb 12 21:53:49.007687 env[1242]: time="2024-02-12T21:53:49.006330398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nnbpt,Uid:d88bf497-97fa-47f1-8364-d2d71c3a9bfc,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a3ff4c75eb5c655bf63431f871e8e8c3b7ee8797b3ce0dc653c08b6c25c381e\""
Feb 12 21:53:49.029276 env[1242]: time="2024-02-12T21:53:49.029243424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pbq6f,Uid:ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b\""
Feb 12 21:53:49.050492 env[1242]: time="2024-02-12T21:53:49.050465338Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 12 21:53:49.052820 env[1242]: time="2024-02-12T21:53:49.052726862Z" level=info msg="CreateContainer within sandbox \"7a3ff4c75eb5c655bf63431f871e8e8c3b7ee8797b3ce0dc653c08b6c25c381e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 12 21:53:49.190301 env[1242]: time="2024-02-12T21:53:49.190271143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-gkbxn,Uid:b5ccdbcd-fe44-4418-a186-a9a9a534702c,Namespace:kube-system,Attempt:0,}"
Feb 12 21:53:49.273640 env[1242]: time="2024-02-12T21:53:49.273364348Z" level=info msg="CreateContainer within sandbox \"7a3ff4c75eb5c655bf63431f871e8e8c3b7ee8797b3ce0dc653c08b6c25c381e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fcce4f357aa6c95f1b9b037a7c09a73dd72a4576937b4713b8d884453e82d59f\""
Feb 12 21:53:49.274819 env[1242]: time="2024-02-12T21:53:49.274713606Z" level=info msg="StartContainer for \"fcce4f357aa6c95f1b9b037a7c09a73dd72a4576937b4713b8d884453e82d59f\""
Feb 12 21:53:49.301572 env[1242]: time="2024-02-12T21:53:49.301495008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:53:49.301572 env[1242]: time="2024-02-12T21:53:49.301521148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:53:49.301572 env[1242]: time="2024-02-12T21:53:49.301528228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:53:49.301832 env[1242]: time="2024-02-12T21:53:49.301803786Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6b7d50db6ae420ed8467d42a378ce8ae4eeffb80e9b3b3c78ae81964cbfdc94 pid=2463 runtime=io.containerd.runc.v2
Feb 12 21:53:49.322731 env[1242]: time="2024-02-12T21:53:49.322694364Z" level=info msg="StartContainer for \"fcce4f357aa6c95f1b9b037a7c09a73dd72a4576937b4713b8d884453e82d59f\" returns successfully"
Feb 12 21:53:49.347957 env[1242]: time="2024-02-12T21:53:49.347916690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-gkbxn,Uid:b5ccdbcd-fe44-4418-a186-a9a9a534702c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6b7d50db6ae420ed8467d42a378ce8ae4eeffb80e9b3b3c78ae81964cbfdc94\""
Feb 12 21:53:50.324637 kubelet[2252]: I0212 21:53:50.324006 2252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nnbpt" podStartSLOduration=2.323966899 pod.CreationTimestamp="2024-02-12 21:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:53:50.322982994 +0000 UTC m=+16.139985145" watchObservedRunningTime="2024-02-12 21:53:50.323966899 +0000 UTC m=+16.140969056"
Feb 12 21:53:52.774378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974680346.mount: Deactivated successfully.
Feb 12 21:53:55.909300 env[1242]: time="2024-02-12T21:53:55.909251429Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:55.934229 env[1242]: time="2024-02-12T21:53:55.934206293Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:55.941714 env[1242]: time="2024-02-12T21:53:55.940824243Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:55.947754 env[1242]: time="2024-02-12T21:53:55.947736100Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 12 21:53:55.950296 env[1242]: time="2024-02-12T21:53:55.950215570Z" level=info msg="CreateContainer within sandbox \"9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 21:53:55.950481 env[1242]: time="2024-02-12T21:53:55.950466465Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 12 21:53:56.035898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397457857.mount: Deactivated successfully.
Feb 12 21:53:56.044800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1638854349.mount: Deactivated successfully.
Feb 12 21:53:56.071708 env[1242]: time="2024-02-12T21:53:56.071677515Z" level=info msg="CreateContainer within sandbox \"9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134\""
Feb 12 21:53:56.072809 env[1242]: time="2024-02-12T21:53:56.072778006Z" level=info msg="StartContainer for \"4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134\""
Feb 12 21:53:56.112875 env[1242]: time="2024-02-12T21:53:56.112847418Z" level=info msg="StartContainer for \"4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134\" returns successfully"
Feb 12 21:53:56.426689 env[1242]: time="2024-02-12T21:53:56.426647450Z" level=info msg="shim disconnected" id=4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134
Feb 12 21:53:56.426689 env[1242]: time="2024-02-12T21:53:56.426683724Z" level=warning msg="cleaning up after shim disconnected" id=4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134 namespace=k8s.io
Feb 12 21:53:56.426689 env[1242]: time="2024-02-12T21:53:56.426692933Z" level=info msg="cleaning up dead shim"
Feb 12 21:53:56.432850 env[1242]: time="2024-02-12T21:53:56.432816669Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:53:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2662 runtime=io.containerd.runc.v2\n"
Feb 12 21:53:57.028878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134-rootfs.mount: Deactivated successfully.
Feb 12 21:53:57.368568 env[1242]: time="2024-02-12T21:53:57.368350221Z" level=info msg="CreateContainer within sandbox \"9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 21:53:57.393575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3737823484.mount: Deactivated successfully.
Feb 12 21:53:57.395998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1046565127.mount: Deactivated successfully.
Feb 12 21:53:57.399703 env[1242]: time="2024-02-12T21:53:57.399668080Z" level=info msg="CreateContainer within sandbox \"9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7\""
Feb 12 21:53:57.401141 env[1242]: time="2024-02-12T21:53:57.400801389Z" level=info msg="StartContainer for \"9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7\""
Feb 12 21:53:57.438314 env[1242]: time="2024-02-12T21:53:57.438287203Z" level=info msg="StartContainer for \"9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7\" returns successfully"
Feb 12 21:53:57.448259 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 21:53:57.448416 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 21:53:57.448636 systemd[1]: Stopping systemd-sysctl.service...
Feb 12 21:53:57.450514 systemd[1]: Starting systemd-sysctl.service...
Feb 12 21:53:57.462198 systemd[1]: Finished systemd-sysctl.service.
Feb 12 21:53:57.489523 env[1242]: time="2024-02-12T21:53:57.489489840Z" level=info msg="shim disconnected" id=9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7
Feb 12 21:53:57.489523 env[1242]: time="2024-02-12T21:53:57.489520772Z" level=warning msg="cleaning up after shim disconnected" id=9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7 namespace=k8s.io
Feb 12 21:53:57.489523 env[1242]: time="2024-02-12T21:53:57.489526569Z" level=info msg="cleaning up dead shim"
Feb 12 21:53:57.496796 env[1242]: time="2024-02-12T21:53:57.496766445Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:53:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2728 runtime=io.containerd.runc.v2\n"
Feb 12 21:53:58.017266 env[1242]: time="2024-02-12T21:53:58.017236083Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:58.026809 env[1242]: time="2024-02-12T21:53:58.026783988Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:58.034247 env[1242]: time="2024-02-12T21:53:58.031608902Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:53:58.034247 env[1242]: time="2024-02-12T21:53:58.031826853Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 12 21:53:58.034247 env[1242]: time="2024-02-12T21:53:58.033984805Z" level=info msg="CreateContainer within sandbox \"f6b7d50db6ae420ed8467d42a378ce8ae4eeffb80e9b3b3c78ae81964cbfdc94\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 12 21:53:58.051470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4205154236.mount: Deactivated successfully.
Feb 12 21:53:58.054876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1440915314.mount: Deactivated successfully.
Feb 12 21:53:58.057147 env[1242]: time="2024-02-12T21:53:58.057125938Z" level=info msg="CreateContainer within sandbox \"f6b7d50db6ae420ed8467d42a378ce8ae4eeffb80e9b3b3c78ae81964cbfdc94\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b\""
Feb 12 21:53:58.058383 env[1242]: time="2024-02-12T21:53:58.058134861Z" level=info msg="StartContainer for \"a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b\""
Feb 12 21:53:58.084682 env[1242]: time="2024-02-12T21:53:58.084655760Z" level=info msg="StartContainer for \"a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b\" returns successfully"
Feb 12 21:53:58.333901 env[1242]: time="2024-02-12T21:53:58.333842692Z" level=info msg="CreateContainer within sandbox \"9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 21:53:58.340879 env[1242]: time="2024-02-12T21:53:58.340854735Z" level=info msg="CreateContainer within sandbox \"9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51\""
Feb 12 21:53:58.341277 env[1242]: time="2024-02-12T21:53:58.341261228Z" level=info msg="StartContainer for \"ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51\""
Feb 12 21:53:58.410947 env[1242]: time="2024-02-12T21:53:58.410920037Z" level=info msg="StartContainer for \"ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51\" returns successfully"
Feb 12 21:53:58.719442 env[1242]: time="2024-02-12T21:53:58.719404728Z" level=info msg="shim disconnected" id=ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51
Feb 12 21:53:58.719628 env[1242]: time="2024-02-12T21:53:58.719616254Z" level=warning msg="cleaning up after shim disconnected" id=ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51 namespace=k8s.io
Feb 12 21:53:58.719700 env[1242]: time="2024-02-12T21:53:58.719686293Z" level=info msg="cleaning up dead shim"
Feb 12 21:53:58.733423 env[1242]: time="2024-02-12T21:53:58.733394929Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:53:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2817 runtime=io.containerd.runc.v2\n"
Feb 12 21:53:59.337698 env[1242]: time="2024-02-12T21:53:59.337675052Z" level=info msg="CreateContainer within sandbox \"9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 21:53:59.354755 env[1242]: time="2024-02-12T21:53:59.354724777Z" level=info msg="CreateContainer within sandbox \"9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1\""
Feb 12 21:53:59.355173 kubelet[2252]: I0212 21:53:59.355154 2252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-gkbxn" podStartSLOduration=-9.223372025499672e+09 pod.CreationTimestamp="2024-02-12 21:53:48 +0000 UTC" firstStartedPulling="2024-02-12 21:53:49.348698352 +0000 UTC m=+15.165700503" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:53:58.358243093 +0000 UTC m=+24.175245250" watchObservedRunningTime="2024-02-12 21:53:59.355104022 +0000 UTC m=+25.172106180"
Feb 12 21:53:59.355560 env[1242]: time="2024-02-12T21:53:59.355545439Z" level=info msg="StartContainer for \"038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1\""
Feb 12 21:53:59.394957 env[1242]: time="2024-02-12T21:53:59.394926332Z" level=info msg="StartContainer for \"038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1\" returns successfully"
Feb 12 21:53:59.410175 env[1242]: time="2024-02-12T21:53:59.410127027Z" level=info msg="shim disconnected" id=038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1
Feb 12 21:53:59.410175 env[1242]: time="2024-02-12T21:53:59.410168565Z" level=warning msg="cleaning up after shim disconnected" id=038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1 namespace=k8s.io
Feb 12 21:53:59.410175 env[1242]: time="2024-02-12T21:53:59.410177596Z" level=info msg="cleaning up dead shim"
Feb 12 21:53:59.416416 env[1242]: time="2024-02-12T21:53:59.416379528Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:53:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2875 runtime=io.containerd.runc.v2\n"
Feb 12 21:54:00.029165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1-rootfs.mount: Deactivated successfully.
Feb 12 21:54:00.339222 env[1242]: time="2024-02-12T21:54:00.339045370Z" level=info msg="CreateContainer within sandbox \"9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 21:54:00.346152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4288517682.mount: Deactivated successfully.
Feb 12 21:54:00.349391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2667148447.mount: Deactivated successfully.
Feb 12 21:54:00.354069 env[1242]: time="2024-02-12T21:54:00.354037638Z" level=info msg="CreateContainer within sandbox \"9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186\"" Feb 12 21:54:00.355540 env[1242]: time="2024-02-12T21:54:00.354555807Z" level=info msg="StartContainer for \"d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186\"" Feb 12 21:54:00.392073 env[1242]: time="2024-02-12T21:54:00.392010713Z" level=info msg="StartContainer for \"d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186\" returns successfully" Feb 12 21:54:00.593808 kubelet[2252]: I0212 21:54:00.593654 2252 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 21:54:00.616248 kubelet[2252]: I0212 21:54:00.616222 2252 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:54:00.621779 kubelet[2252]: I0212 21:54:00.621761 2252 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:54:00.711617 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 12 21:54:00.815822 kubelet[2252]: I0212 21:54:00.815803 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e66dc7c5-57d2-448e-bb82-49a32dea8397-config-volume\") pod \"coredns-787d4945fb-9tzhl\" (UID: \"e66dc7c5-57d2-448e-bb82-49a32dea8397\") " pod="kube-system/coredns-787d4945fb-9tzhl" Feb 12 21:54:00.816216 kubelet[2252]: I0212 21:54:00.816202 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-597nd\" (UniqueName: \"kubernetes.io/projected/b242c47f-973b-4b76-b9b2-2c0c17a5bca5-kube-api-access-597nd\") pod \"coredns-787d4945fb-cztmc\" (UID: \"b242c47f-973b-4b76-b9b2-2c0c17a5bca5\") " pod="kube-system/coredns-787d4945fb-cztmc" Feb 12 21:54:00.816260 kubelet[2252]: I0212 21:54:00.816253 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrt7x\" (UniqueName: \"kubernetes.io/projected/e66dc7c5-57d2-448e-bb82-49a32dea8397-kube-api-access-hrt7x\") pod \"coredns-787d4945fb-9tzhl\" (UID: \"e66dc7c5-57d2-448e-bb82-49a32dea8397\") " pod="kube-system/coredns-787d4945fb-9tzhl" Feb 12 21:54:00.816290 kubelet[2252]: I0212 21:54:00.816266 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b242c47f-973b-4b76-b9b2-2c0c17a5bca5-config-volume\") pod \"coredns-787d4945fb-cztmc\" (UID: \"b242c47f-973b-4b76-b9b2-2c0c17a5bca5\") " pod="kube-system/coredns-787d4945fb-cztmc" Feb 12 21:54:00.932618 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 12 21:54:01.224128 env[1242]: time="2024-02-12T21:54:01.224098086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-9tzhl,Uid:e66dc7c5-57d2-448e-bb82-49a32dea8397,Namespace:kube-system,Attempt:0,}" Feb 12 21:54:01.224667 env[1242]: time="2024-02-12T21:54:01.224651919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-cztmc,Uid:b242c47f-973b-4b76-b9b2-2c0c17a5bca5,Namespace:kube-system,Attempt:0,}" Feb 12 21:54:02.596613 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 21:54:02.596643 systemd-networkd[1114]: cilium_host: Link UP Feb 12 21:54:02.596716 systemd-networkd[1114]: cilium_net: Link UP Feb 12 21:54:02.596718 systemd-networkd[1114]: cilium_net: Gained carrier Feb 12 21:54:02.596797 systemd-networkd[1114]: cilium_host: Gained carrier Feb 12 21:54:02.596882 systemd-networkd[1114]: cilium_host: Gained IPv6LL Feb 12 21:54:02.682694 systemd-networkd[1114]: cilium_net: Gained IPv6LL Feb 12 21:54:02.851044 systemd-networkd[1114]: cilium_vxlan: Link UP Feb 12 21:54:02.851049 systemd-networkd[1114]: cilium_vxlan: Gained carrier Feb 12 21:54:03.825652 kernel: NET: Registered PF_ALG protocol family Feb 12 21:54:04.240536 systemd-networkd[1114]: lxc_health: Link UP Feb 12 21:54:04.245313 systemd-networkd[1114]: lxc_health: Gained carrier Feb 12 21:54:04.245654 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 21:54:04.339120 systemd-networkd[1114]: cilium_vxlan: Gained IPv6LL Feb 12 21:54:04.477969 systemd-networkd[1114]: lxc87ea8e912e64: Link UP Feb 12 21:54:04.485618 kernel: eth0: renamed from tmpb4c1f Feb 12 21:54:04.491319 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc87ea8e912e64: link becomes ready Feb 12 21:54:04.490311 systemd-networkd[1114]: lxc87ea8e912e64: Gained carrier Feb 12 21:54:04.507271 kernel: eth0: renamed from tmp2b9b7 Feb 12 21:54:04.519792 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc69cc72d2c2fd: link becomes ready Feb 12 
21:54:04.496447 systemd-networkd[1114]: lxc69cc72d2c2fd: Link UP Feb 12 21:54:04.509409 systemd-networkd[1114]: lxc69cc72d2c2fd: Gained carrier Feb 12 21:54:04.771627 kubelet[2252]: I0212 21:54:04.770102 2252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pbq6f" podStartSLOduration=-9.223372020084711e+09 pod.CreationTimestamp="2024-02-12 21:53:48 +0000 UTC" firstStartedPulling="2024-02-12 21:53:49.046315051 +0000 UTC m=+14.863317200" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:54:01.385427223 +0000 UTC m=+27.202429380" watchObservedRunningTime="2024-02-12 21:54:04.770063807 +0000 UTC m=+30.587065959" Feb 12 21:54:05.682721 systemd-networkd[1114]: lxc_health: Gained IPv6LL Feb 12 21:54:06.002709 systemd-networkd[1114]: lxc87ea8e912e64: Gained IPv6LL Feb 12 21:54:06.130764 systemd-networkd[1114]: lxc69cc72d2c2fd: Gained IPv6LL Feb 12 21:54:07.101200 env[1242]: time="2024-02-12T21:54:07.098806480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:54:07.101200 env[1242]: time="2024-02-12T21:54:07.098836180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:54:07.101200 env[1242]: time="2024-02-12T21:54:07.098842962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:54:07.101200 env[1242]: time="2024-02-12T21:54:07.098921489Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4c1ff0cd0281ddaf6c7f93cf6bcf7bd066fefc5550b42114217f9e523f97399 pid=3427 runtime=io.containerd.runc.v2 Feb 12 21:54:07.114969 systemd[1]: run-containerd-runc-k8s.io-b4c1ff0cd0281ddaf6c7f93cf6bcf7bd066fefc5550b42114217f9e523f97399-runc.XcDVOA.mount: Deactivated successfully. 
Feb 12 21:54:07.126040 env[1242]: time="2024-02-12T21:54:07.125156289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:54:07.126040 env[1242]: time="2024-02-12T21:54:07.125181130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:54:07.126040 env[1242]: time="2024-02-12T21:54:07.125187615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:54:07.126040 env[1242]: time="2024-02-12T21:54:07.125274597Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b9b70173baf3a2ddbefe69bc2b8cddbd9344ac6689252eca52b0134c6ce5d3a pid=3453 runtime=io.containerd.runc.v2 Feb 12 21:54:07.145925 systemd-resolved[1172]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 21:54:07.165936 systemd-resolved[1172]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 21:54:07.178934 env[1242]: time="2024-02-12T21:54:07.178907396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-cztmc,Uid:b242c47f-973b-4b76-b9b2-2c0c17a5bca5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4c1ff0cd0281ddaf6c7f93cf6bcf7bd066fefc5550b42114217f9e523f97399\"" Feb 12 21:54:07.181753 env[1242]: time="2024-02-12T21:54:07.181735898Z" level=info msg="CreateContainer within sandbox \"b4c1ff0cd0281ddaf6c7f93cf6bcf7bd066fefc5550b42114217f9e523f97399\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 21:54:07.199414 env[1242]: time="2024-02-12T21:54:07.199299430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-9tzhl,Uid:e66dc7c5-57d2-448e-bb82-49a32dea8397,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"2b9b70173baf3a2ddbefe69bc2b8cddbd9344ac6689252eca52b0134c6ce5d3a\"" Feb 12 21:54:07.200881 env[1242]: time="2024-02-12T21:54:07.200866482Z" level=info msg="CreateContainer within sandbox \"2b9b70173baf3a2ddbefe69bc2b8cddbd9344ac6689252eca52b0134c6ce5d3a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 21:54:07.408808 kubelet[2252]: I0212 21:54:07.386105 2252 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 12 21:54:07.414628 env[1242]: time="2024-02-12T21:54:07.414582529Z" level=info msg="CreateContainer within sandbox \"2b9b70173baf3a2ddbefe69bc2b8cddbd9344ac6689252eca52b0134c6ce5d3a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f0ef16d591e4b5c0157af6892ee854b607c9ea0ac70e31563cd64fed06d06eb\"" Feb 12 21:54:07.421740 env[1242]: time="2024-02-12T21:54:07.421724889Z" level=info msg="StartContainer for \"8f0ef16d591e4b5c0157af6892ee854b607c9ea0ac70e31563cd64fed06d06eb\"" Feb 12 21:54:07.436334 env[1242]: time="2024-02-12T21:54:07.436304377Z" level=info msg="CreateContainer within sandbox \"b4c1ff0cd0281ddaf6c7f93cf6bcf7bd066fefc5550b42114217f9e523f97399\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"169408d802613158d95e782636be10d550775d1a2b9a2889dd46f64c5b930a64\"" Feb 12 21:54:07.436841 env[1242]: time="2024-02-12T21:54:07.436827777Z" level=info msg="StartContainer for \"169408d802613158d95e782636be10d550775d1a2b9a2889dd46f64c5b930a64\"" Feb 12 21:54:07.472146 env[1242]: time="2024-02-12T21:54:07.472117533Z" level=info msg="StartContainer for \"8f0ef16d591e4b5c0157af6892ee854b607c9ea0ac70e31563cd64fed06d06eb\" returns successfully" Feb 12 21:54:07.483438 env[1242]: time="2024-02-12T21:54:07.483411115Z" level=info msg="StartContainer for \"169408d802613158d95e782636be10d550775d1a2b9a2889dd46f64c5b930a64\" returns successfully" Feb 12 21:54:08.390238 kubelet[2252]: I0212 21:54:08.390222 2252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/coredns-787d4945fb-cztmc" podStartSLOduration=20.390188613 pod.CreationTimestamp="2024-02-12 21:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:54:08.389918329 +0000 UTC m=+34.206920486" watchObservedRunningTime="2024-02-12 21:54:08.390188613 +0000 UTC m=+34.207190765" Feb 12 21:54:08.419473 kubelet[2252]: I0212 21:54:08.419454 2252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-9tzhl" podStartSLOduration=20.419420005 pod.CreationTimestamp="2024-02-12 21:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:54:08.412104275 +0000 UTC m=+34.229106432" watchObservedRunningTime="2024-02-12 21:54:08.419420005 +0000 UTC m=+34.236422164" Feb 12 21:55:05.386694 systemd[1]: Started sshd@5-139.178.70.99:22-139.178.89.65:39306.service. Feb 12 21:55:05.472291 sshd[3637]: Accepted publickey for core from 139.178.89.65 port 39306 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:05.481540 sshd[3637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:05.491779 systemd[1]: Started session-8.scope. Feb 12 21:55:05.492025 systemd-logind[1224]: New session 8 of user core. Feb 12 21:55:05.701030 sshd[3637]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:05.703475 systemd[1]: sshd@5-139.178.70.99:22-139.178.89.65:39306.service: Deactivated successfully. Feb 12 21:55:05.704379 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 21:55:05.704650 systemd-logind[1224]: Session 8 logged out. Waiting for processes to exit. Feb 12 21:55:05.705255 systemd-logind[1224]: Removed session 8. Feb 12 21:55:10.704323 systemd[1]: Started sshd@6-139.178.70.99:22-139.178.89.65:57828.service. 
Feb 12 21:55:10.754135 sshd[3651]: Accepted publickey for core from 139.178.89.65 port 57828 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:10.755227 sshd[3651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:10.757840 systemd-logind[1224]: New session 9 of user core. Feb 12 21:55:10.758228 systemd[1]: Started session-9.scope. Feb 12 21:55:10.876076 sshd[3651]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:10.877759 systemd[1]: sshd@6-139.178.70.99:22-139.178.89.65:57828.service: Deactivated successfully. Feb 12 21:55:10.878460 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 21:55:10.878477 systemd-logind[1224]: Session 9 logged out. Waiting for processes to exit. Feb 12 21:55:10.879202 systemd-logind[1224]: Removed session 9. Feb 12 21:55:15.879801 systemd[1]: Started sshd@7-139.178.70.99:22-139.178.89.65:57834.service. Feb 12 21:55:15.912590 sshd[3664]: Accepted publickey for core from 139.178.89.65 port 57834 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:15.914091 sshd[3664]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:15.918114 systemd[1]: Started session-10.scope. Feb 12 21:55:15.918652 systemd-logind[1224]: New session 10 of user core. Feb 12 21:55:16.087216 sshd[3664]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:16.161511 systemd[1]: sshd@7-139.178.70.99:22-139.178.89.65:57834.service: Deactivated successfully. Feb 12 21:55:16.162490 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 21:55:16.162842 systemd-logind[1224]: Session 10 logged out. Waiting for processes to exit. Feb 12 21:55:16.163537 systemd-logind[1224]: Removed session 10. Feb 12 21:55:21.090521 systemd[1]: Started sshd@8-139.178.70.99:22-139.178.89.65:46966.service. 
Feb 12 21:55:21.127090 sshd[3680]: Accepted publickey for core from 139.178.89.65 port 46966 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:21.128540 sshd[3680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:21.138507 systemd-logind[1224]: New session 11 of user core. Feb 12 21:55:21.138864 systemd[1]: Started session-11.scope. Feb 12 21:55:21.241685 sshd[3680]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:21.242029 systemd[1]: Started sshd@9-139.178.70.99:22-139.178.89.65:46974.service. Feb 12 21:55:21.245377 systemd[1]: sshd@8-139.178.70.99:22-139.178.89.65:46966.service: Deactivated successfully. Feb 12 21:55:21.246422 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 21:55:21.247213 systemd-logind[1224]: Session 11 logged out. Waiting for processes to exit. Feb 12 21:55:21.247998 systemd-logind[1224]: Removed session 11. Feb 12 21:55:21.273832 sshd[3692]: Accepted publickey for core from 139.178.89.65 port 46974 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:21.274658 sshd[3692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:21.277193 systemd-logind[1224]: New session 12 of user core. Feb 12 21:55:21.277467 systemd[1]: Started session-12.scope. Feb 12 21:55:21.982844 systemd[1]: Started sshd@10-139.178.70.99:22-139.178.89.65:46976.service. Feb 12 21:55:21.988675 sshd[3692]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:21.999753 systemd[1]: sshd@9-139.178.70.99:22-139.178.89.65:46974.service: Deactivated successfully. Feb 12 21:55:22.001217 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 21:55:22.001241 systemd-logind[1224]: Session 12 logged out. Waiting for processes to exit. Feb 12 21:55:22.001843 systemd-logind[1224]: Removed session 12. 
Feb 12 21:55:22.034557 sshd[3703]: Accepted publickey for core from 139.178.89.65 port 46976 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:22.035365 sshd[3703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:22.038640 systemd-logind[1224]: New session 13 of user core. Feb 12 21:55:22.039256 systemd[1]: Started session-13.scope. Feb 12 21:55:22.158800 sshd[3703]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:22.160776 systemd[1]: sshd@10-139.178.70.99:22-139.178.89.65:46976.service: Deactivated successfully. Feb 12 21:55:22.160953 systemd-logind[1224]: Session 13 logged out. Waiting for processes to exit. Feb 12 21:55:22.161251 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 21:55:22.162038 systemd-logind[1224]: Removed session 13. Feb 12 21:55:27.162314 systemd[1]: Started sshd@11-139.178.70.99:22-139.178.89.65:46982.service. Feb 12 21:55:27.191572 sshd[3719]: Accepted publickey for core from 139.178.89.65 port 46982 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:27.192485 sshd[3719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:27.195289 systemd[1]: Started session-14.scope. Feb 12 21:55:27.196003 systemd-logind[1224]: New session 14 of user core. Feb 12 21:55:27.330340 sshd[3719]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:27.332046 systemd[1]: sshd@11-139.178.70.99:22-139.178.89.65:46982.service: Deactivated successfully. Feb 12 21:55:27.332904 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 21:55:27.333304 systemd-logind[1224]: Session 14 logged out. Waiting for processes to exit. Feb 12 21:55:27.333939 systemd-logind[1224]: Removed session 14. Feb 12 21:55:32.333231 systemd[1]: Started sshd@12-139.178.70.99:22-139.178.89.65:58122.service. 
Feb 12 21:55:32.362677 sshd[3732]: Accepted publickey for core from 139.178.89.65 port 58122 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:32.363851 sshd[3732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:32.366454 systemd-logind[1224]: New session 15 of user core. Feb 12 21:55:32.366780 systemd[1]: Started session-15.scope. Feb 12 21:55:32.458175 sshd[3732]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:32.459984 systemd[1]: Started sshd@13-139.178.70.99:22-139.178.89.65:58136.service. Feb 12 21:55:32.462668 systemd-logind[1224]: Session 15 logged out. Waiting for processes to exit. Feb 12 21:55:32.463544 systemd[1]: sshd@12-139.178.70.99:22-139.178.89.65:58122.service: Deactivated successfully. Feb 12 21:55:32.464020 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 21:55:32.464999 systemd-logind[1224]: Removed session 15. Feb 12 21:55:32.488904 sshd[3743]: Accepted publickey for core from 139.178.89.65 port 58136 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:32.489685 sshd[3743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:32.492586 systemd[1]: Started session-16.scope. Feb 12 21:55:32.492993 systemd-logind[1224]: New session 16 of user core. Feb 12 21:55:32.897049 sshd[3743]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:32.897953 systemd[1]: Started sshd@14-139.178.70.99:22-139.178.89.65:58144.service. Feb 12 21:55:32.901068 systemd[1]: sshd@13-139.178.70.99:22-139.178.89.65:58136.service: Deactivated successfully. Feb 12 21:55:32.902109 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 21:55:32.902430 systemd-logind[1224]: Session 16 logged out. Waiting for processes to exit. Feb 12 21:55:32.902951 systemd-logind[1224]: Removed session 16. 
Feb 12 21:55:32.974521 sshd[3754]: Accepted publickey for core from 139.178.89.65 port 58144 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:32.975533 sshd[3754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:32.979572 systemd[1]: Started session-17.scope. Feb 12 21:55:32.979963 systemd-logind[1224]: New session 17 of user core. Feb 12 21:55:33.980858 sshd[3754]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:33.982589 systemd[1]: Started sshd@15-139.178.70.99:22-139.178.89.65:58154.service. Feb 12 21:55:33.991928 systemd[1]: sshd@14-139.178.70.99:22-139.178.89.65:58144.service: Deactivated successfully. Feb 12 21:55:33.992672 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 21:55:33.992886 systemd-logind[1224]: Session 17 logged out. Waiting for processes to exit. Feb 12 21:55:33.993354 systemd-logind[1224]: Removed session 17. Feb 12 21:55:34.023176 sshd[3774]: Accepted publickey for core from 139.178.89.65 port 58154 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:34.024258 sshd[3774]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:34.027294 systemd[1]: Started session-18.scope. Feb 12 21:55:34.027424 systemd-logind[1224]: New session 18 of user core. Feb 12 21:55:34.255009 sshd[3774]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:34.256391 systemd[1]: Started sshd@16-139.178.70.99:22-139.178.89.65:58164.service. Feb 12 21:55:34.259836 systemd[1]: sshd@15-139.178.70.99:22-139.178.89.65:58154.service: Deactivated successfully. Feb 12 21:55:34.260719 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 21:55:34.261057 systemd-logind[1224]: Session 18 logged out. Waiting for processes to exit. Feb 12 21:55:34.261572 systemd-logind[1224]: Removed session 18. 
Feb 12 21:55:34.291474 sshd[3831]: Accepted publickey for core from 139.178.89.65 port 58164 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:34.292468 sshd[3831]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:34.295056 systemd-logind[1224]: New session 19 of user core. Feb 12 21:55:34.295395 systemd[1]: Started session-19.scope. Feb 12 21:55:34.402488 sshd[3831]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:34.405527 systemd-logind[1224]: Session 19 logged out. Waiting for processes to exit. Feb 12 21:55:34.405574 systemd[1]: sshd@16-139.178.70.99:22-139.178.89.65:58164.service: Deactivated successfully. Feb 12 21:55:34.406072 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 21:55:34.406461 systemd-logind[1224]: Removed session 19. Feb 12 21:55:39.405128 systemd[1]: Started sshd@17-139.178.70.99:22-139.178.89.65:40286.service. Feb 12 21:55:39.434892 sshd[3847]: Accepted publickey for core from 139.178.89.65 port 40286 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:39.436204 sshd[3847]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:39.440051 systemd[1]: Started session-20.scope. Feb 12 21:55:39.440282 systemd-logind[1224]: New session 20 of user core. Feb 12 21:55:39.538228 sshd[3847]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:39.539893 systemd[1]: sshd@17-139.178.70.99:22-139.178.89.65:40286.service: Deactivated successfully. Feb 12 21:55:39.540766 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 21:55:39.541100 systemd-logind[1224]: Session 20 logged out. Waiting for processes to exit. Feb 12 21:55:39.541681 systemd-logind[1224]: Removed session 20. Feb 12 21:55:39.945662 systemd[1]: Started sshd@18-139.178.70.99:22-36.112.156.46:60801.service. Feb 12 21:55:44.540646 systemd[1]: Started sshd@19-139.178.70.99:22-139.178.89.65:40288.service. 
Feb 12 21:55:44.569230 sshd[3889]: Accepted publickey for core from 139.178.89.65 port 40288 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:44.570503 sshd[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:44.573978 systemd[1]: Started session-21.scope. Feb 12 21:55:44.574902 systemd-logind[1224]: New session 21 of user core. Feb 12 21:55:44.663915 sshd[3889]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:44.665704 systemd[1]: sshd@19-139.178.70.99:22-139.178.89.65:40288.service: Deactivated successfully. Feb 12 21:55:44.666576 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 21:55:44.666983 systemd-logind[1224]: Session 21 logged out. Waiting for processes to exit. Feb 12 21:55:44.667550 systemd-logind[1224]: Removed session 21. Feb 12 21:55:49.666648 systemd[1]: Started sshd@20-139.178.70.99:22-139.178.89.65:50278.service. Feb 12 21:55:49.695958 sshd[3902]: Accepted publickey for core from 139.178.89.65 port 50278 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:49.697139 sshd[3902]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:49.700219 systemd[1]: Started session-22.scope. Feb 12 21:55:49.700527 systemd-logind[1224]: New session 22 of user core. Feb 12 21:55:49.822086 sshd[3902]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:49.823720 systemd[1]: sshd@20-139.178.70.99:22-139.178.89.65:50278.service: Deactivated successfully. Feb 12 21:55:49.824522 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 21:55:49.824846 systemd-logind[1224]: Session 22 logged out. Waiting for processes to exit. Feb 12 21:55:49.825317 systemd-logind[1224]: Removed session 22. Feb 12 21:55:54.825068 systemd[1]: Started sshd@21-139.178.70.99:22-139.178.89.65:50286.service. 
Feb 12 21:55:54.853963 sshd[3918]: Accepted publickey for core from 139.178.89.65 port 50286 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:54.854997 sshd[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:54.857935 systemd[1]: Started session-23.scope. Feb 12 21:55:54.858050 systemd-logind[1224]: New session 23 of user core. Feb 12 21:55:54.948556 sshd[3918]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:54.950279 systemd[1]: Started sshd@22-139.178.70.99:22-139.178.89.65:50302.service. Feb 12 21:55:54.954742 systemd[1]: sshd@21-139.178.70.99:22-139.178.89.65:50286.service: Deactivated successfully. Feb 12 21:55:54.955223 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 21:55:54.955492 systemd-logind[1224]: Session 23 logged out. Waiting for processes to exit. Feb 12 21:55:54.956085 systemd-logind[1224]: Removed session 23. Feb 12 21:55:54.981036 sshd[3929]: Accepted publickey for core from 139.178.89.65 port 50302 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0 Feb 12 21:55:54.982159 sshd[3929]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:54.985250 systemd[1]: Started session-24.scope. Feb 12 21:55:54.985383 systemd-logind[1224]: New session 24 of user core. Feb 12 21:55:56.930392 env[1242]: time="2024-02-12T21:55:56.928986703Z" level=info msg="StopContainer for \"a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b\" with timeout 30 (s)" Feb 12 21:55:56.930392 env[1242]: time="2024-02-12T21:55:56.929577756Z" level=info msg="Stop container \"a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b\" with signal terminated" Feb 12 21:55:56.958719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b-rootfs.mount: Deactivated successfully. 
Feb 12 21:55:56.979656 env[1242]: time="2024-02-12T21:55:56.979610459Z" level=info msg="shim disconnected" id=a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b Feb 12 21:55:56.979927 env[1242]: time="2024-02-12T21:55:56.979903984Z" level=warning msg="cleaning up after shim disconnected" id=a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b namespace=k8s.io Feb 12 21:55:56.979987 env[1242]: time="2024-02-12T21:55:56.979977822Z" level=info msg="cleaning up dead shim" Feb 12 21:55:56.986948 env[1242]: time="2024-02-12T21:55:56.985532625Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 21:55:56.987293 env[1242]: time="2024-02-12T21:55:56.987275455Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:55:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3977 runtime=io.containerd.runc.v2\n" Feb 12 21:55:56.988236 env[1242]: time="2024-02-12T21:55:56.988220987Z" level=info msg="StopContainer for \"a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b\" returns successfully" Feb 12 21:55:56.988828 env[1242]: time="2024-02-12T21:55:56.988815300Z" level=info msg="StopContainer for \"d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186\" with timeout 1 (s)" Feb 12 21:55:56.988974 env[1242]: time="2024-02-12T21:55:56.988953921Z" level=info msg="StopPodSandbox for \"f6b7d50db6ae420ed8467d42a378ce8ae4eeffb80e9b3b3c78ae81964cbfdc94\"" Feb 12 21:55:56.989027 env[1242]: time="2024-02-12T21:55:56.989004376Z" level=info msg="Container to stop \"a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:55:56.990676 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6b7d50db6ae420ed8467d42a378ce8ae4eeffb80e9b3b3c78ae81964cbfdc94-shm.mount: Deactivated successfully. Feb 12 21:55:56.992971 env[1242]: time="2024-02-12T21:55:56.992950972Z" level=info msg="Stop container \"d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186\" with signal terminated" Feb 12 21:55:57.004154 systemd-networkd[1114]: lxc_health: Link DOWN Feb 12 21:55:57.004160 systemd-networkd[1114]: lxc_health: Lost carrier Feb 12 21:55:57.017880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6b7d50db6ae420ed8467d42a378ce8ae4eeffb80e9b3b3c78ae81964cbfdc94-rootfs.mount: Deactivated successfully. Feb 12 21:55:57.085825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186-rootfs.mount: Deactivated successfully. Feb 12 21:55:57.112557 env[1242]: time="2024-02-12T21:55:57.112502339Z" level=info msg="shim disconnected" id=f6b7d50db6ae420ed8467d42a378ce8ae4eeffb80e9b3b3c78ae81964cbfdc94 Feb 12 21:55:57.117330 env[1242]: time="2024-02-12T21:55:57.112694401Z" level=warning msg="cleaning up after shim disconnected" id=f6b7d50db6ae420ed8467d42a378ce8ae4eeffb80e9b3b3c78ae81964cbfdc94 namespace=k8s.io Feb 12 21:55:57.117330 env[1242]: time="2024-02-12T21:55:57.112706882Z" level=info msg="cleaning up dead shim" Feb 12 21:55:57.119049 env[1242]: time="2024-02-12T21:55:57.119001545Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:55:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4035 runtime=io.containerd.runc.v2\n" Feb 12 21:55:57.121213 env[1242]: time="2024-02-12T21:55:57.121186638Z" level=info msg="TearDown network for sandbox \"f6b7d50db6ae420ed8467d42a378ce8ae4eeffb80e9b3b3c78ae81964cbfdc94\" successfully" Feb 12 21:55:57.126524 env[1242]: time="2024-02-12T21:55:57.121210083Z" level=info msg="StopPodSandbox for \"f6b7d50db6ae420ed8467d42a378ce8ae4eeffb80e9b3b3c78ae81964cbfdc94\" returns 
successfully" Feb 12 21:55:57.132995 env[1242]: time="2024-02-12T21:55:57.132958761Z" level=info msg="shim disconnected" id=d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186 Feb 12 21:55:57.133123 env[1242]: time="2024-02-12T21:55:57.133108958Z" level=warning msg="cleaning up after shim disconnected" id=d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186 namespace=k8s.io Feb 12 21:55:57.133204 env[1242]: time="2024-02-12T21:55:57.133191207Z" level=info msg="cleaning up dead shim" Feb 12 21:55:57.139367 env[1242]: time="2024-02-12T21:55:57.139337080Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:55:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4049 runtime=io.containerd.runc.v2\n" Feb 12 21:55:57.154705 env[1242]: time="2024-02-12T21:55:57.154652517Z" level=info msg="StopContainer for \"d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186\" returns successfully" Feb 12 21:55:57.155850 env[1242]: time="2024-02-12T21:55:57.155832752Z" level=info msg="StopPodSandbox for \"9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b\"" Feb 12 21:55:57.156011 env[1242]: time="2024-02-12T21:55:57.155995061Z" level=info msg="Container to stop \"9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:55:57.159595 env[1242]: time="2024-02-12T21:55:57.156087770Z" level=info msg="Container to stop \"038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:55:57.159595 env[1242]: time="2024-02-12T21:55:57.156102716Z" level=info msg="Container to stop \"4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:55:57.159595 env[1242]: time="2024-02-12T21:55:57.156112715Z" level=info msg="Container to stop 
\"ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:55:57.159595 env[1242]: time="2024-02-12T21:55:57.156120392Z" level=info msg="Container to stop \"d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:55:57.195354 env[1242]: time="2024-02-12T21:55:57.194669260Z" level=info msg="shim disconnected" id=9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b Feb 12 21:55:57.196000 env[1242]: time="2024-02-12T21:55:57.195987032Z" level=warning msg="cleaning up after shim disconnected" id=9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b namespace=k8s.io Feb 12 21:55:57.196059 env[1242]: time="2024-02-12T21:55:57.196049103Z" level=info msg="cleaning up dead shim" Feb 12 21:55:57.200769 env[1242]: time="2024-02-12T21:55:57.200739995Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:55:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4081 runtime=io.containerd.runc.v2\n" Feb 12 21:55:57.207556 env[1242]: time="2024-02-12T21:55:57.207526586Z" level=info msg="TearDown network for sandbox \"9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b\" successfully" Feb 12 21:55:57.207653 env[1242]: time="2024-02-12T21:55:57.207553385Z" level=info msg="StopPodSandbox for \"9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b\" returns successfully" Feb 12 21:55:57.236737 kubelet[2252]: I0212 21:55:57.236703 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5ccdbcd-fe44-4418-a186-a9a9a534702c-cilium-config-path\") pod \"b5ccdbcd-fe44-4418-a186-a9a9a534702c\" (UID: \"b5ccdbcd-fe44-4418-a186-a9a9a534702c\") " Feb 12 21:55:57.236737 kubelet[2252]: I0212 21:55:57.236750 2252 reconciler_common.go:169] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7jfhb\" (UniqueName: \"kubernetes.io/projected/b5ccdbcd-fe44-4418-a186-a9a9a534702c-kube-api-access-7jfhb\") pod \"b5ccdbcd-fe44-4418-a186-a9a9a534702c\" (UID: \"b5ccdbcd-fe44-4418-a186-a9a9a534702c\") " Feb 12 21:55:57.249453 kubelet[2252]: W0212 21:55:57.249410 2252 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b5ccdbcd-fe44-4418-a186-a9a9a534702c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 21:55:57.262728 kubelet[2252]: I0212 21:55:57.259060 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5ccdbcd-fe44-4418-a186-a9a9a534702c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b5ccdbcd-fe44-4418-a186-a9a9a534702c" (UID: "b5ccdbcd-fe44-4418-a186-a9a9a534702c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 21:55:57.271952 kubelet[2252]: I0212 21:55:57.271908 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ccdbcd-fe44-4418-a186-a9a9a534702c-kube-api-access-7jfhb" (OuterVolumeSpecName: "kube-api-access-7jfhb") pod "b5ccdbcd-fe44-4418-a186-a9a9a534702c" (UID: "b5ccdbcd-fe44-4418-a186-a9a9a534702c"). InnerVolumeSpecName "kube-api-access-7jfhb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 21:55:57.337402 kubelet[2252]: I0212 21:55:57.337369 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-bpf-maps\") pod \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " Feb 12 21:55:57.337402 kubelet[2252]: I0212 21:55:57.337399 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cilium-cgroup\") pod \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " Feb 12 21:55:57.337402 kubelet[2252]: I0212 21:55:57.337410 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-xtables-lock\") pod \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " Feb 12 21:55:57.337552 kubelet[2252]: I0212 21:55:57.337420 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-hostproc\") pod \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " Feb 12 21:55:57.337552 kubelet[2252]: I0212 21:55:57.337431 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-etc-cni-netd\") pod \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " Feb 12 21:55:57.337552 kubelet[2252]: I0212 21:55:57.337448 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mccpp\" (UniqueName: 
\"kubernetes.io/projected/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-kube-api-access-mccpp\") pod \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " Feb 12 21:55:57.337552 kubelet[2252]: I0212 21:55:57.337459 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-hubble-tls\") pod \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " Feb 12 21:55:57.337552 kubelet[2252]: I0212 21:55:57.337469 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-host-proc-sys-net\") pod \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " Feb 12 21:55:57.337552 kubelet[2252]: I0212 21:55:57.337480 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-host-proc-sys-kernel\") pod \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " Feb 12 21:55:57.337686 kubelet[2252]: I0212 21:55:57.337492 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-clustermesh-secrets\") pod \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " Feb 12 21:55:57.337686 kubelet[2252]: I0212 21:55:57.337502 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-lib-modules\") pod \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " Feb 12 21:55:57.337686 kubelet[2252]: I0212 
21:55:57.337514 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cilium-config-path\") pod \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " Feb 12 21:55:57.337686 kubelet[2252]: I0212 21:55:57.337526 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cni-path\") pod \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " Feb 12 21:55:57.337686 kubelet[2252]: I0212 21:55:57.337536 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cilium-run\") pod \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\" (UID: \"ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6\") " Feb 12 21:55:57.337686 kubelet[2252]: I0212 21:55:57.337563 2252 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5ccdbcd-fe44-4418-a186-a9a9a534702c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.337827 kubelet[2252]: I0212 21:55:57.337571 2252 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-7jfhb\" (UniqueName: \"kubernetes.io/projected/b5ccdbcd-fe44-4418-a186-a9a9a534702c-kube-api-access-7jfhb\") on node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.337827 kubelet[2252]: I0212 21:55:57.337789 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" (UID: "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.337827 kubelet[2252]: I0212 21:55:57.337816 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" (UID: "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.337827 kubelet[2252]: I0212 21:55:57.337825 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" (UID: "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.340697 kubelet[2252]: I0212 21:55:57.340669 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" (UID: "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 21:55:57.340772 kubelet[2252]: I0212 21:55:57.340711 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" (UID: "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.341672 kubelet[2252]: I0212 21:55:57.340799 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" (UID: "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 21:55:57.341672 kubelet[2252]: I0212 21:55:57.340816 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" (UID: "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.341672 kubelet[2252]: I0212 21:55:57.340829 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" (UID: "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.341672 kubelet[2252]: I0212 21:55:57.340841 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" (UID: "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.341672 kubelet[2252]: I0212 21:55:57.340854 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-hostproc" (OuterVolumeSpecName: "hostproc") pod "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" (UID: "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.341786 kubelet[2252]: I0212 21:55:57.340889 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" (UID: "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.341786 kubelet[2252]: W0212 21:55:57.340968 2252 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 21:55:57.343256 kubelet[2252]: I0212 21:55:57.343234 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" (UID: "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 21:55:57.348740 kubelet[2252]: I0212 21:55:57.343266 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cni-path" (OuterVolumeSpecName: "cni-path") pod "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" (UID: "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:55:57.348740 kubelet[2252]: I0212 21:55:57.345587 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-kube-api-access-mccpp" (OuterVolumeSpecName: "kube-api-access-mccpp") pod "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" (UID: "ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6"). InnerVolumeSpecName "kube-api-access-mccpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 21:55:57.437854 kubelet[2252]: I0212 21:55:57.437820 2252 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.437854 kubelet[2252]: I0212 21:55:57.437844 2252 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.437854 kubelet[2252]: I0212 21:55:57.437851 2252 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.437854 kubelet[2252]: I0212 21:55:57.437859 2252 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.437854 kubelet[2252]: I0212 21:55:57.437864 2252 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.438746 kubelet[2252]: I0212 21:55:57.437869 2252 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.438746 kubelet[2252]: I0212 21:55:57.437874 2252 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.438746 kubelet[2252]: I0212 21:55:57.437879 2252 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.438746 kubelet[2252]: I0212 21:55:57.437885 2252 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-mccpp\" (UniqueName: \"kubernetes.io/projected/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-kube-api-access-mccpp\") on node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.438746 kubelet[2252]: I0212 21:55:57.437891 2252 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.438746 kubelet[2252]: I0212 21:55:57.437896 2252 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.438746 kubelet[2252]: I0212 21:55:57.437901 2252 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.438746 kubelet[2252]: I0212 21:55:57.437906 2252 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-lib-modules\") on 
node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.438926 kubelet[2252]: I0212 21:55:57.437929 2252 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 21:55:57.566353 kubelet[2252]: I0212 21:55:57.566336 2252 scope.go:115] "RemoveContainer" containerID="d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186" Feb 12 21:55:57.575714 env[1242]: time="2024-02-12T21:55:57.574813576Z" level=info msg="RemoveContainer for \"d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186\"" Feb 12 21:55:57.594385 env[1242]: time="2024-02-12T21:55:57.594291690Z" level=info msg="RemoveContainer for \"d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186\" returns successfully" Feb 12 21:55:57.612262 kubelet[2252]: I0212 21:55:57.612229 2252 scope.go:115] "RemoveContainer" containerID="038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1" Feb 12 21:55:57.616718 env[1242]: time="2024-02-12T21:55:57.616359901Z" level=info msg="RemoveContainer for \"038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1\"" Feb 12 21:55:57.618293 env[1242]: time="2024-02-12T21:55:57.618218117Z" level=info msg="RemoveContainer for \"038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1\" returns successfully" Feb 12 21:55:57.618458 kubelet[2252]: I0212 21:55:57.618445 2252 scope.go:115] "RemoveContainer" containerID="ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51" Feb 12 21:55:57.619396 env[1242]: time="2024-02-12T21:55:57.619124616Z" level=info msg="RemoveContainer for \"ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51\"" Feb 12 21:55:57.623236 env[1242]: time="2024-02-12T21:55:57.623204543Z" level=info msg="RemoveContainer for \"ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51\" returns successfully" Feb 12 21:55:57.623391 
kubelet[2252]: I0212 21:55:57.623373 2252 scope.go:115] "RemoveContainer" containerID="9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7" Feb 12 21:55:57.624019 env[1242]: time="2024-02-12T21:55:57.623994200Z" level=info msg="RemoveContainer for \"9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7\"" Feb 12 21:55:57.625931 env[1242]: time="2024-02-12T21:55:57.625896098Z" level=info msg="RemoveContainer for \"9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7\" returns successfully" Feb 12 21:55:57.626974 kubelet[2252]: I0212 21:55:57.626957 2252 scope.go:115] "RemoveContainer" containerID="4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134" Feb 12 21:55:57.628723 env[1242]: time="2024-02-12T21:55:57.628695666Z" level=info msg="RemoveContainer for \"4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134\"" Feb 12 21:55:57.631084 env[1242]: time="2024-02-12T21:55:57.631053496Z" level=info msg="RemoveContainer for \"4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134\" returns successfully" Feb 12 21:55:57.631373 kubelet[2252]: I0212 21:55:57.631353 2252 scope.go:115] "RemoveContainer" containerID="d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186" Feb 12 21:55:57.631574 env[1242]: time="2024-02-12T21:55:57.631515937Z" level=error msg="ContainerStatus for \"d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186\": not found" Feb 12 21:55:57.634921 kubelet[2252]: E0212 21:55:57.634899 2252 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186\": not found" containerID="d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186" Feb 12 
21:55:57.635837 kubelet[2252]: I0212 21:55:57.635812 2252 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186} err="failed to get container status \"d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186\": rpc error: code = NotFound desc = an error occurred when try to find container \"d98856b9cd3f47e8bd6563143ba32226d9446770b45d996e5f23001e16088186\": not found" Feb 12 21:55:57.635837 kubelet[2252]: I0212 21:55:57.635835 2252 scope.go:115] "RemoveContainer" containerID="038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1" Feb 12 21:55:57.636091 env[1242]: time="2024-02-12T21:55:57.636030288Z" level=error msg="ContainerStatus for \"038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1\": not found" Feb 12 21:55:57.636198 kubelet[2252]: E0212 21:55:57.636177 2252 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1\": not found" containerID="038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1" Feb 12 21:55:57.636198 kubelet[2252]: I0212 21:55:57.636198 2252 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1} err="failed to get container status \"038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"038e1e904823c7730205675b3273917cc3a948cd1e7cc692ce16c3ef4a6da2b1\": not found" Feb 12 21:55:57.636298 kubelet[2252]: I0212 21:55:57.636204 2252 scope.go:115] "RemoveContainer" 
containerID="ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51" Feb 12 21:55:57.636356 env[1242]: time="2024-02-12T21:55:57.636294690Z" level=error msg="ContainerStatus for \"ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51\": not found" Feb 12 21:55:57.636413 kubelet[2252]: E0212 21:55:57.636363 2252 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51\": not found" containerID="ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51" Feb 12 21:55:57.636413 kubelet[2252]: I0212 21:55:57.636375 2252 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51} err="failed to get container status \"ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff7d7116f7c7d7343ce74c5d29f6920a6ddf8c834ff0bdcc5435d62495a11b51\": not found" Feb 12 21:55:57.636413 kubelet[2252]: I0212 21:55:57.636380 2252 scope.go:115] "RemoveContainer" containerID="9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7" Feb 12 21:55:57.636535 kubelet[2252]: E0212 21:55:57.636515 2252 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7\": not found" containerID="9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7" Feb 12 21:55:57.636535 kubelet[2252]: I0212 21:55:57.636526 2252 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={Type:containerd ID:9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7} err="failed to get container status \"9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7\": not found"
Feb 12 21:55:57.636535 kubelet[2252]: I0212 21:55:57.636532 2252 scope.go:115] "RemoveContainer" containerID="4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134"
Feb 12 21:55:57.636678 env[1242]: time="2024-02-12T21:55:57.636450829Z" level=error msg="ContainerStatus for \"9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e26e92ac4faebdc74dd0071b5875eac50a8f0182db9d5a6be477adacf62e8d7\": not found"
Feb 12 21:55:57.636678 env[1242]: time="2024-02-12T21:55:57.636593713Z" level=error msg="ContainerStatus for \"4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134\": not found"
Feb 12 21:55:57.636908 kubelet[2252]: E0212 21:55:57.636678 2252 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134\": not found" containerID="4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134"
Feb 12 21:55:57.636908 kubelet[2252]: I0212 21:55:57.636693 2252 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134} err="failed to get container status \"4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e794a24e40de42cfa60fc05d30d226dd8f74a43c95b80f34b2ff42d4f0a5134\": not found"
Feb 12 21:55:57.636908 kubelet[2252]: I0212 21:55:57.636701 2252 scope.go:115] "RemoveContainer" containerID="a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b"
Feb 12 21:55:57.637216 env[1242]: time="2024-02-12T21:55:57.637202252Z" level=info msg="RemoveContainer for \"a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b\""
Feb 12 21:55:57.638671 env[1242]: time="2024-02-12T21:55:57.638651627Z" level=info msg="RemoveContainer for \"a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b\" returns successfully"
Feb 12 21:55:57.638748 kubelet[2252]: I0212 21:55:57.638729 2252 scope.go:115] "RemoveContainer" containerID="a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b"
Feb 12 21:55:57.638931 env[1242]: time="2024-02-12T21:55:57.638894113Z" level=error msg="ContainerStatus for \"a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b\": not found"
Feb 12 21:55:57.639051 kubelet[2252]: E0212 21:55:57.639034 2252 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b\": not found" containerID="a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b"
Feb 12 21:55:57.639051 kubelet[2252]: I0212 21:55:57.639052 2252 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b} err="failed to get container status \"a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b\": rpc error: code = NotFound desc = an error occurred when try to find container \"a80f123eff9bd7aadfeba02fdce06fd44a86f0f26dbdab0194f22d2e9cbe827b\": not found"
Feb 12 21:55:57.942280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b-rootfs.mount: Deactivated successfully.
Feb 12 21:55:57.942369 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a67c68620bfee1697c23b602f1d44844d74f82aee7fb2284f2a11cc6395d54b-shm.mount: Deactivated successfully.
Feb 12 21:55:57.942434 systemd[1]: var-lib-kubelet-pods-b5ccdbcd\x2dfe44\x2d4418\x2da186\x2da9a9a534702c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7jfhb.mount: Deactivated successfully.
Feb 12 21:55:57.942512 systemd[1]: var-lib-kubelet-pods-ba5bccf0\x2d1c02\x2d4cc9\x2d9b14\x2d4e62a38be5e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmccpp.mount: Deactivated successfully.
Feb 12 21:55:57.942593 systemd[1]: var-lib-kubelet-pods-ba5bccf0\x2d1c02\x2d4cc9\x2d9b14\x2d4e62a38be5e6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 21:55:57.942671 systemd[1]: var-lib-kubelet-pods-ba5bccf0\x2d1c02\x2d4cc9\x2d9b14\x2d4e62a38be5e6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 21:55:58.276171 kubelet[2252]: I0212 21:55:58.276149 2252 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b5ccdbcd-fe44-4418-a186-a9a9a534702c path="/var/lib/kubelet/pods/b5ccdbcd-fe44-4418-a186-a9a9a534702c/volumes"
Feb 12 21:55:58.276826 kubelet[2252]: I0212 21:55:58.276813 2252 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6 path="/var/lib/kubelet/pods/ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6/volumes"
Feb 12 21:55:58.858684 sshd[3929]: pam_unix(sshd:session): session closed for user core
Feb 12 21:55:58.860375 systemd[1]: Started sshd@23-139.178.70.99:22-139.178.89.65:34566.service.
Feb 12 21:55:58.864413 systemd[1]: sshd@22-139.178.70.99:22-139.178.89.65:50302.service: Deactivated successfully.
Feb 12 21:55:58.864915 systemd[1]: session-24.scope: Deactivated successfully.
Feb 12 21:55:58.865349 systemd-logind[1224]: Session 24 logged out. Waiting for processes to exit.
Feb 12 21:55:58.865826 systemd-logind[1224]: Removed session 24.
Feb 12 21:55:58.956097 sshd[4098]: Accepted publickey for core from 139.178.89.65 port 34566 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0
Feb 12 21:55:58.956994 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:55:58.967573 systemd[1]: Started session-25.scope.
Feb 12 21:55:58.967713 systemd-logind[1224]: New session 25 of user core.
Feb 12 21:55:59.351473 kubelet[2252]: E0212 21:55:59.351451 2252 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 21:55:59.499912 systemd[1]: Started sshd@24-139.178.70.99:22-139.178.89.65:34580.service.
Feb 12 21:55:59.507496 sshd[4098]: pam_unix(sshd:session): session closed for user core
Feb 12 21:55:59.510158 systemd[1]: sshd@23-139.178.70.99:22-139.178.89.65:34566.service: Deactivated successfully.
Feb 12 21:55:59.510926 systemd-logind[1224]: Session 25 logged out. Waiting for processes to exit.
Feb 12 21:55:59.511003 systemd[1]: session-25.scope: Deactivated successfully.
Feb 12 21:55:59.511751 systemd-logind[1224]: Removed session 25.
Feb 12 21:55:59.517746 kubelet[2252]: I0212 21:55:59.516168 2252 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:55:59.517746 kubelet[2252]: E0212 21:55:59.516886 2252 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" containerName="apply-sysctl-overwrites"
Feb 12 21:55:59.517746 kubelet[2252]: E0212 21:55:59.516901 2252 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b5ccdbcd-fe44-4418-a186-a9a9a534702c" containerName="cilium-operator"
Feb 12 21:55:59.517746 kubelet[2252]: E0212 21:55:59.516906 2252 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" containerName="mount-bpf-fs"
Feb 12 21:55:59.517746 kubelet[2252]: E0212 21:55:59.516910 2252 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" containerName="cilium-agent"
Feb 12 21:55:59.517746 kubelet[2252]: E0212 21:55:59.516915 2252 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" containerName="mount-cgroup"
Feb 12 21:55:59.517746 kubelet[2252]: E0212 21:55:59.516918 2252 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" containerName="clean-cilium-state"
Feb 12 21:55:59.517746 kubelet[2252]: I0212 21:55:59.516948 2252 memory_manager.go:346] "RemoveStaleState removing state" podUID="b5ccdbcd-fe44-4418-a186-a9a9a534702c" containerName="cilium-operator"
Feb 12 21:55:59.517746 kubelet[2252]: I0212 21:55:59.516953 2252 memory_manager.go:346] "RemoveStaleState removing state" podUID="ba5bccf0-1c02-4cc9-9b14-4e62a38be5e6" containerName="cilium-agent"
Feb 12 21:55:59.549139 sshd[4109]: Accepted publickey for core from 139.178.89.65 port 34580 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0
Feb 12 21:55:59.551689 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:55:59.554372 systemd-logind[1224]: New session 26 of user core.
Feb 12 21:55:59.554729 systemd[1]: Started session-26.scope.
Feb 12 21:55:59.657243 kubelet[2252]: I0212 21:55:59.657166 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-run\") pod \"cilium-cg4lx\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") " pod="kube-system/cilium-cg4lx"
Feb 12 21:55:59.657243 kubelet[2252]: I0212 21:55:59.657197 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-etc-cni-netd\") pod \"cilium-cg4lx\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") " pod="kube-system/cilium-cg4lx"
Feb 12 21:55:59.657243 kubelet[2252]: I0212 21:55:59.657212 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2ns9\" (UniqueName: \"kubernetes.io/projected/93808980-6df7-4201-8f6a-88e4a899ed5c-kube-api-access-w2ns9\") pod \"cilium-cg4lx\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") " pod="kube-system/cilium-cg4lx"
Feb 12 21:55:59.657243 kubelet[2252]: I0212 21:55:59.657226 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93808980-6df7-4201-8f6a-88e4a899ed5c-clustermesh-secrets\") pod \"cilium-cg4lx\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") " pod="kube-system/cilium-cg4lx"
Feb 12 21:55:59.657243 kubelet[2252]: I0212 21:55:59.657239 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-ipsec-secrets\") pod \"cilium-cg4lx\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") " pod="kube-system/cilium-cg4lx"
Feb 12 21:55:59.657243 kubelet[2252]: I0212 21:55:59.657250 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-bpf-maps\") pod \"cilium-cg4lx\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") " pod="kube-system/cilium-cg4lx"
Feb 12 21:55:59.657430 kubelet[2252]: I0212 21:55:59.657264 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-cgroup\") pod \"cilium-cg4lx\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") " pod="kube-system/cilium-cg4lx"
Feb 12 21:55:59.657430 kubelet[2252]: I0212 21:55:59.657280 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-host-proc-sys-kernel\") pod \"cilium-cg4lx\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") " pod="kube-system/cilium-cg4lx"
Feb 12 21:55:59.657430 kubelet[2252]: I0212 21:55:59.657291 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-cni-path\") pod \"cilium-cg4lx\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") " pod="kube-system/cilium-cg4lx"
Feb 12 21:55:59.657430 kubelet[2252]: I0212 21:55:59.657301 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-host-proc-sys-net\") pod \"cilium-cg4lx\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") " pod="kube-system/cilium-cg4lx"
Feb 12 21:55:59.657430 kubelet[2252]: I0212 21:55:59.657314 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-hostproc\") pod \"cilium-cg4lx\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") " pod="kube-system/cilium-cg4lx"
Feb 12 21:55:59.657430 kubelet[2252]: I0212 21:55:59.657326 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-xtables-lock\") pod \"cilium-cg4lx\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") " pod="kube-system/cilium-cg4lx"
Feb 12 21:55:59.657539 kubelet[2252]: I0212 21:55:59.657338 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93808980-6df7-4201-8f6a-88e4a899ed5c-hubble-tls\") pod \"cilium-cg4lx\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") " pod="kube-system/cilium-cg4lx"
Feb 12 21:55:59.657539 kubelet[2252]: I0212 21:55:59.657351 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-config-path\") pod \"cilium-cg4lx\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") " pod="kube-system/cilium-cg4lx"
Feb 12 21:55:59.657539 kubelet[2252]: I0212 21:55:59.657361 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-lib-modules\") pod \"cilium-cg4lx\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") " pod="kube-system/cilium-cg4lx"
Feb 12 21:55:59.687134 systemd[1]: Started sshd@25-139.178.70.99:22-139.178.89.65:34584.service.
Feb 12 21:55:59.694809 sshd[4109]: pam_unix(sshd:session): session closed for user core
Feb 12 21:55:59.702096 systemd[1]: sshd@24-139.178.70.99:22-139.178.89.65:34580.service: Deactivated successfully.
Feb 12 21:55:59.703197 systemd[1]: session-26.scope: Deactivated successfully.
Feb 12 21:55:59.704834 systemd-logind[1224]: Session 26 logged out. Waiting for processes to exit.
Feb 12 21:55:59.706684 systemd-logind[1224]: Removed session 26.
Feb 12 21:55:59.726320 sshd[4122]: Accepted publickey for core from 139.178.89.65 port 34584 ssh2: RSA SHA256:HiqmCZ5wMmSvO0wWrhK3vjnBlpa7aAHv9/SVtM7jhV0
Feb 12 21:55:59.727352 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:55:59.729830 systemd-logind[1224]: New session 27 of user core.
Feb 12 21:55:59.730670 systemd[1]: Started session-27.scope.
Feb 12 21:55:59.825202 env[1242]: time="2024-02-12T21:55:59.825149323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cg4lx,Uid:93808980-6df7-4201-8f6a-88e4a899ed5c,Namespace:kube-system,Attempt:0,}"
Feb 12 21:55:59.891532 env[1242]: time="2024-02-12T21:55:59.891490356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:55:59.891651 env[1242]: time="2024-02-12T21:55:59.891532875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:55:59.891651 env[1242]: time="2024-02-12T21:55:59.891554890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:55:59.891765 env[1242]: time="2024-02-12T21:55:59.891743072Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3e4d899431714ccbf342e831018a9bc59ce705a06ddd7b01bb5c4fd534f268a pid=4144 runtime=io.containerd.runc.v2
Feb 12 21:55:59.920865 env[1242]: time="2024-02-12T21:55:59.920351938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cg4lx,Uid:93808980-6df7-4201-8f6a-88e4a899ed5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3e4d899431714ccbf342e831018a9bc59ce705a06ddd7b01bb5c4fd534f268a\""
Feb 12 21:55:59.924657 env[1242]: time="2024-02-12T21:55:59.924630382Z" level=info msg="CreateContainer within sandbox \"b3e4d899431714ccbf342e831018a9bc59ce705a06ddd7b01bb5c4fd534f268a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 21:55:59.931077 env[1242]: time="2024-02-12T21:55:59.931038797Z" level=info msg="CreateContainer within sandbox \"b3e4d899431714ccbf342e831018a9bc59ce705a06ddd7b01bb5c4fd534f268a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"319e7fc1a1b91b58004e59a726ec733124bdfa4267c85c0ecbdd047ccc17abd9\""
Feb 12 21:55:59.932239 env[1242]: time="2024-02-12T21:55:59.931875796Z" level=info msg="StartContainer for \"319e7fc1a1b91b58004e59a726ec733124bdfa4267c85c0ecbdd047ccc17abd9\""
Feb 12 21:55:59.979285 env[1242]: time="2024-02-12T21:55:59.979243682Z" level=info msg="StartContainer for \"319e7fc1a1b91b58004e59a726ec733124bdfa4267c85c0ecbdd047ccc17abd9\" returns successfully"
Feb 12 21:56:00.084180 env[1242]: time="2024-02-12T21:56:00.084150027Z" level=info msg="shim disconnected" id=319e7fc1a1b91b58004e59a726ec733124bdfa4267c85c0ecbdd047ccc17abd9
Feb 12 21:56:00.084376 env[1242]: time="2024-02-12T21:56:00.084363450Z" level=warning msg="cleaning up after shim disconnected" id=319e7fc1a1b91b58004e59a726ec733124bdfa4267c85c0ecbdd047ccc17abd9 namespace=k8s.io
Feb 12 21:56:00.084438 env[1242]: time="2024-02-12T21:56:00.084428696Z" level=info msg="cleaning up dead shim"
Feb 12 21:56:00.089016 env[1242]: time="2024-02-12T21:56:00.088992281Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4228 runtime=io.containerd.runc.v2\n"
Feb 12 21:56:00.582012 env[1242]: time="2024-02-12T21:56:00.581981009Z" level=info msg="StopPodSandbox for \"b3e4d899431714ccbf342e831018a9bc59ce705a06ddd7b01bb5c4fd534f268a\""
Feb 12 21:56:00.582153 env[1242]: time="2024-02-12T21:56:00.582137781Z" level=info msg="Container to stop \"319e7fc1a1b91b58004e59a726ec733124bdfa4267c85c0ecbdd047ccc17abd9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 21:56:00.630842 env[1242]: time="2024-02-12T21:56:00.630805302Z" level=info msg="shim disconnected" id=b3e4d899431714ccbf342e831018a9bc59ce705a06ddd7b01bb5c4fd534f268a
Feb 12 21:56:00.630842 env[1242]: time="2024-02-12T21:56:00.630839692Z" level=warning msg="cleaning up after shim disconnected" id=b3e4d899431714ccbf342e831018a9bc59ce705a06ddd7b01bb5c4fd534f268a namespace=k8s.io
Feb 12 21:56:00.630842 env[1242]: time="2024-02-12T21:56:00.630846206Z" level=info msg="cleaning up dead shim"
Feb 12 21:56:00.636000 env[1242]: time="2024-02-12T21:56:00.635971823Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4260 runtime=io.containerd.runc.v2\n"
Feb 12 21:56:00.636345 env[1242]: time="2024-02-12T21:56:00.636328679Z" level=info msg="TearDown network for sandbox \"b3e4d899431714ccbf342e831018a9bc59ce705a06ddd7b01bb5c4fd534f268a\" successfully"
Feb 12 21:56:00.636411 env[1242]: time="2024-02-12T21:56:00.636399361Z" level=info msg="StopPodSandbox for \"b3e4d899431714ccbf342e831018a9bc59ce705a06ddd7b01bb5c4fd534f268a\" returns successfully"
Feb 12 21:56:00.763900 kubelet[2252]: I0212 21:56:00.763855 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-bpf-maps\") pod \"93808980-6df7-4201-8f6a-88e4a899ed5c\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") "
Feb 12 21:56:00.764207 kubelet[2252]: I0212 21:56:00.763946 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "93808980-6df7-4201-8f6a-88e4a899ed5c" (UID: "93808980-6df7-4201-8f6a-88e4a899ed5c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 21:56:00.764207 kubelet[2252]: I0212 21:56:00.763903 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93808980-6df7-4201-8f6a-88e4a899ed5c-clustermesh-secrets\") pod \"93808980-6df7-4201-8f6a-88e4a899ed5c\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") "
Feb 12 21:56:00.764274 kubelet[2252]: I0212 21:56:00.764232 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-config-path\") pod \"93808980-6df7-4201-8f6a-88e4a899ed5c\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") "
Feb 12 21:56:00.764274 kubelet[2252]: I0212 21:56:00.764254 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-xtables-lock\") pod \"93808980-6df7-4201-8f6a-88e4a899ed5c\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") "
Feb 12 21:56:00.764274 kubelet[2252]: I0212 21:56:00.764273 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2ns9\" (UniqueName: \"kubernetes.io/projected/93808980-6df7-4201-8f6a-88e4a899ed5c-kube-api-access-w2ns9\") pod \"93808980-6df7-4201-8f6a-88e4a899ed5c\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") "
Feb 12 21:56:00.764356 kubelet[2252]: I0212 21:56:00.764289 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93808980-6df7-4201-8f6a-88e4a899ed5c-hubble-tls\") pod \"93808980-6df7-4201-8f6a-88e4a899ed5c\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") "
Feb 12 21:56:00.764356 kubelet[2252]: I0212 21:56:00.764312 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-lib-modules\") pod \"93808980-6df7-4201-8f6a-88e4a899ed5c\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") "
Feb 12 21:56:00.764356 kubelet[2252]: I0212 21:56:00.764326 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-hostproc\") pod \"93808980-6df7-4201-8f6a-88e4a899ed5c\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") "
Feb 12 21:56:00.764356 kubelet[2252]: I0212 21:56:00.764339 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-cgroup\") pod \"93808980-6df7-4201-8f6a-88e4a899ed5c\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") "
Feb 12 21:56:00.764356 kubelet[2252]: I0212 21:56:00.764352 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-host-proc-sys-net\") pod \"93808980-6df7-4201-8f6a-88e4a899ed5c\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") "
Feb 12 21:56:00.764477 kubelet[2252]: I0212 21:56:00.764365 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-host-proc-sys-kernel\") pod \"93808980-6df7-4201-8f6a-88e4a899ed5c\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") "
Feb 12 21:56:00.764477 kubelet[2252]: I0212 21:56:00.764386 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-cni-path\") pod \"93808980-6df7-4201-8f6a-88e4a899ed5c\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") "
Feb 12 21:56:00.764477 kubelet[2252]: I0212 21:56:00.764399 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-etc-cni-netd\") pod \"93808980-6df7-4201-8f6a-88e4a899ed5c\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") "
Feb 12 21:56:00.764477 kubelet[2252]: I0212 21:56:00.764416 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-ipsec-secrets\") pod \"93808980-6df7-4201-8f6a-88e4a899ed5c\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") "
Feb 12 21:56:00.764477 kubelet[2252]: I0212 21:56:00.764429 2252 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-run\") pod \"93808980-6df7-4201-8f6a-88e4a899ed5c\" (UID: \"93808980-6df7-4201-8f6a-88e4a899ed5c\") "
Feb 12 21:56:00.764477 kubelet[2252]: I0212 21:56:00.764466 2252 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 12 21:56:00.764667 kubelet[2252]: I0212 21:56:00.764485 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "93808980-6df7-4201-8f6a-88e4a899ed5c" (UID: "93808980-6df7-4201-8f6a-88e4a899ed5c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 21:56:00.764667 kubelet[2252]: W0212 21:56:00.764579 2252 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/93808980-6df7-4201-8f6a-88e4a899ed5c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 21:56:00.767625 kubelet[2252]: I0212 21:56:00.765039 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "93808980-6df7-4201-8f6a-88e4a899ed5c" (UID: "93808980-6df7-4201-8f6a-88e4a899ed5c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 21:56:00.767625 kubelet[2252]: I0212 21:56:00.765065 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "93808980-6df7-4201-8f6a-88e4a899ed5c" (UID: "93808980-6df7-4201-8f6a-88e4a899ed5c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 21:56:00.767625 kubelet[2252]: I0212 21:56:00.766196 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "93808980-6df7-4201-8f6a-88e4a899ed5c" (UID: "93808980-6df7-4201-8f6a-88e4a899ed5c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 21:56:00.767625 kubelet[2252]: I0212 21:56:00.766219 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "93808980-6df7-4201-8f6a-88e4a899ed5c" (UID: "93808980-6df7-4201-8f6a-88e4a899ed5c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 21:56:00.767625 kubelet[2252]: I0212 21:56:00.766246 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "93808980-6df7-4201-8f6a-88e4a899ed5c" (UID: "93808980-6df7-4201-8f6a-88e4a899ed5c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 21:56:00.767063 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3e4d899431714ccbf342e831018a9bc59ce705a06ddd7b01bb5c4fd534f268a-shm.mount: Deactivated successfully.
Feb 12 21:56:00.768005 kubelet[2252]: I0212 21:56:00.766262 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-cni-path" (OuterVolumeSpecName: "cni-path") pod "93808980-6df7-4201-8f6a-88e4a899ed5c" (UID: "93808980-6df7-4201-8f6a-88e4a899ed5c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 21:56:00.768005 kubelet[2252]: I0212 21:56:00.766275 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "93808980-6df7-4201-8f6a-88e4a899ed5c" (UID: "93808980-6df7-4201-8f6a-88e4a899ed5c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 21:56:00.768005 kubelet[2252]: I0212 21:56:00.766442 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "93808980-6df7-4201-8f6a-88e4a899ed5c" (UID: "93808980-6df7-4201-8f6a-88e4a899ed5c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 21:56:00.768005 kubelet[2252]: I0212 21:56:00.766643 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-hostproc" (OuterVolumeSpecName: "hostproc") pod "93808980-6df7-4201-8f6a-88e4a899ed5c" (UID: "93808980-6df7-4201-8f6a-88e4a899ed5c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 21:56:00.770909 systemd[1]: var-lib-kubelet-pods-93808980\x2d6df7\x2d4201\x2d8f6a\x2d88e4a899ed5c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw2ns9.mount: Deactivated successfully.
Feb 12 21:56:00.770986 systemd[1]: var-lib-kubelet-pods-93808980\x2d6df7\x2d4201\x2d8f6a\x2d88e4a899ed5c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 21:56:00.772161 kubelet[2252]: I0212 21:56:00.772138 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "93808980-6df7-4201-8f6a-88e4a899ed5c" (UID: "93808980-6df7-4201-8f6a-88e4a899ed5c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 21:56:00.773829 kubelet[2252]: I0212 21:56:00.773812 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93808980-6df7-4201-8f6a-88e4a899ed5c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "93808980-6df7-4201-8f6a-88e4a899ed5c" (UID: "93808980-6df7-4201-8f6a-88e4a899ed5c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 21:56:00.774255 systemd[1]: var-lib-kubelet-pods-93808980\x2d6df7\x2d4201\x2d8f6a\x2d88e4a899ed5c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 12 21:56:00.774655 kubelet[2252]: I0212 21:56:00.774641 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93808980-6df7-4201-8f6a-88e4a899ed5c-kube-api-access-w2ns9" (OuterVolumeSpecName: "kube-api-access-w2ns9") pod "93808980-6df7-4201-8f6a-88e4a899ed5c" (UID: "93808980-6df7-4201-8f6a-88e4a899ed5c"). InnerVolumeSpecName "kube-api-access-w2ns9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 21:56:00.774954 kubelet[2252]: I0212 21:56:00.774943 2252 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93808980-6df7-4201-8f6a-88e4a899ed5c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "93808980-6df7-4201-8f6a-88e4a899ed5c" (UID: "93808980-6df7-4201-8f6a-88e4a899ed5c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 21:56:00.775284 systemd[1]: var-lib-kubelet-pods-93808980\x2d6df7\x2d4201\x2d8f6a\x2d88e4a899ed5c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 21:56:00.866115 kubelet[2252]: I0212 21:56:00.865148 2252 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 12 21:56:00.866291 kubelet[2252]: I0212 21:56:00.866282 2252 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93808980-6df7-4201-8f6a-88e4a899ed5c-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 12 21:56:00.866363 kubelet[2252]: I0212 21:56:00.866356 2252 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 12 21:56:00.866418 kubelet[2252]: I0212 21:56:00.866411 2252 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-w2ns9\" (UniqueName: \"kubernetes.io/projected/93808980-6df7-4201-8f6a-88e4a899ed5c-kube-api-access-w2ns9\") on node \"localhost\" DevicePath \"\""
Feb 12 21:56:00.866470 kubelet[2252]: I0212 21:56:00.866463 2252 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 12 21:56:00.866523 kubelet[2252]: I0212 21:56:00.866515 2252 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 12 21:56:00.866586 kubelet[2252]: I0212 21:56:00.866579 2252 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 12 21:56:00.866642 kubelet[2252]: I0212 21:56:00.866635 2252 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 12 21:56:00.866695 kubelet[2252]: I0212 21:56:00.866688 2252 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 12 21:56:00.866745 kubelet[2252]: I0212 21:56:00.866739 2252 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Feb 12 21:56:00.866798 kubelet[2252]: I0212 21:56:00.866792 2252 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 12 21:56:00.866848 kubelet[2252]: I0212 21:56:00.866841 2252 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 12 21:56:00.866898 kubelet[2252]: I0212 21:56:00.866892 2252 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93808980-6df7-4201-8f6a-88e4a899ed5c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 12 21:56:00.866950 kubelet[2252]: I0212 21:56:00.866943 2252 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93808980-6df7-4201-8f6a-88e4a899ed5c-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 12 21:56:01.583556 kubelet[2252]: I0212 21:56:01.583535 2252 scope.go:115] "RemoveContainer" containerID="319e7fc1a1b91b58004e59a726ec733124bdfa4267c85c0ecbdd047ccc17abd9"
Feb 12 21:56:01.585052 env[1242]: time="2024-02-12T21:56:01.584985608Z" level=info msg="RemoveContainer for \"319e7fc1a1b91b58004e59a726ec733124bdfa4267c85c0ecbdd047ccc17abd9\""
Feb 12 21:56:01.587898 env[1242]: time="2024-02-12T21:56:01.587838820Z" level=info msg="RemoveContainer for \"319e7fc1a1b91b58004e59a726ec733124bdfa4267c85c0ecbdd047ccc17abd9\" returns successfully"
Feb 12 21:56:01.611614 kubelet[2252]: I0212 21:56:01.610879 2252 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:56:01.611614 kubelet[2252]: E0212 21:56:01.610941 2252 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93808980-6df7-4201-8f6a-88e4a899ed5c" containerName="mount-cgroup"
Feb 12 21:56:01.611614 kubelet[2252]: I0212 21:56:01.610968 2252 memory_manager.go:346] "RemoveStaleState removing state" podUID="93808980-6df7-4201-8f6a-88e4a899ed5c" containerName="mount-cgroup"
Feb 12 21:56:01.772300 kubelet[2252]: I0212 21:56:01.772272 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1473077-124d-4713-a5bb-4d9f6b22ca9f-cilium-run\") pod \"cilium-hrxvq\" (UID: \"f1473077-124d-4713-a5bb-4d9f6b22ca9f\") " pod="kube-system/cilium-hrxvq"
Feb 12 21:56:01.772300 kubelet[2252]: I0212 21:56:01.772307 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1473077-124d-4713-a5bb-4d9f6b22ca9f-xtables-lock\") pod \"cilium-hrxvq\" (UID: \"f1473077-124d-4713-a5bb-4d9f6b22ca9f\") " pod="kube-system/cilium-hrxvq"
Feb 12 21:56:01.772567 kubelet[2252]: I0212 21:56:01.772338 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1473077-124d-4713-a5bb-4d9f6b22ca9f-clustermesh-secrets\") pod \"cilium-hrxvq\" (UID: \"f1473077-124d-4713-a5bb-4d9f6b22ca9f\") " pod="kube-system/cilium-hrxvq" Feb 12
21:56:01.772567 kubelet[2252]: I0212 21:56:01.772351 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f1473077-124d-4713-a5bb-4d9f6b22ca9f-cilium-ipsec-secrets\") pod \"cilium-hrxvq\" (UID: \"f1473077-124d-4713-a5bb-4d9f6b22ca9f\") " pod="kube-system/cilium-hrxvq" Feb 12 21:56:01.772567 kubelet[2252]: I0212 21:56:01.772370 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1473077-124d-4713-a5bb-4d9f6b22ca9f-hostproc\") pod \"cilium-hrxvq\" (UID: \"f1473077-124d-4713-a5bb-4d9f6b22ca9f\") " pod="kube-system/cilium-hrxvq" Feb 12 21:56:01.772567 kubelet[2252]: I0212 21:56:01.772409 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1473077-124d-4713-a5bb-4d9f6b22ca9f-cilium-cgroup\") pod \"cilium-hrxvq\" (UID: \"f1473077-124d-4713-a5bb-4d9f6b22ca9f\") " pod="kube-system/cilium-hrxvq" Feb 12 21:56:01.772567 kubelet[2252]: I0212 21:56:01.772424 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1473077-124d-4713-a5bb-4d9f6b22ca9f-cilium-config-path\") pod \"cilium-hrxvq\" (UID: \"f1473077-124d-4713-a5bb-4d9f6b22ca9f\") " pod="kube-system/cilium-hrxvq" Feb 12 21:56:01.772682 kubelet[2252]: I0212 21:56:01.772437 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1473077-124d-4713-a5bb-4d9f6b22ca9f-host-proc-sys-kernel\") pod \"cilium-hrxvq\" (UID: \"f1473077-124d-4713-a5bb-4d9f6b22ca9f\") " pod="kube-system/cilium-hrxvq" Feb 12 21:56:01.772682 kubelet[2252]: I0212 21:56:01.772500 2252 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1473077-124d-4713-a5bb-4d9f6b22ca9f-etc-cni-netd\") pod \"cilium-hrxvq\" (UID: \"f1473077-124d-4713-a5bb-4d9f6b22ca9f\") " pod="kube-system/cilium-hrxvq" Feb 12 21:56:01.772682 kubelet[2252]: I0212 21:56:01.772512 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1473077-124d-4713-a5bb-4d9f6b22ca9f-host-proc-sys-net\") pod \"cilium-hrxvq\" (UID: \"f1473077-124d-4713-a5bb-4d9f6b22ca9f\") " pod="kube-system/cilium-hrxvq" Feb 12 21:56:01.772682 kubelet[2252]: I0212 21:56:01.772553 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1473077-124d-4713-a5bb-4d9f6b22ca9f-cni-path\") pod \"cilium-hrxvq\" (UID: \"f1473077-124d-4713-a5bb-4d9f6b22ca9f\") " pod="kube-system/cilium-hrxvq" Feb 12 21:56:01.772682 kubelet[2252]: I0212 21:56:01.772622 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1473077-124d-4713-a5bb-4d9f6b22ca9f-lib-modules\") pod \"cilium-hrxvq\" (UID: \"f1473077-124d-4713-a5bb-4d9f6b22ca9f\") " pod="kube-system/cilium-hrxvq" Feb 12 21:56:01.772682 kubelet[2252]: I0212 21:56:01.772637 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8gh4\" (UniqueName: \"kubernetes.io/projected/f1473077-124d-4713-a5bb-4d9f6b22ca9f-kube-api-access-z8gh4\") pod \"cilium-hrxvq\" (UID: \"f1473077-124d-4713-a5bb-4d9f6b22ca9f\") " pod="kube-system/cilium-hrxvq" Feb 12 21:56:01.772793 kubelet[2252]: I0212 21:56:01.772650 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/f1473077-124d-4713-a5bb-4d9f6b22ca9f-hubble-tls\") pod \"cilium-hrxvq\" (UID: \"f1473077-124d-4713-a5bb-4d9f6b22ca9f\") " pod="kube-system/cilium-hrxvq" Feb 12 21:56:01.772793 kubelet[2252]: I0212 21:56:01.772690 2252 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1473077-124d-4713-a5bb-4d9f6b22ca9f-bpf-maps\") pod \"cilium-hrxvq\" (UID: \"f1473077-124d-4713-a5bb-4d9f6b22ca9f\") " pod="kube-system/cilium-hrxvq" Feb 12 21:56:01.923472 env[1242]: time="2024-02-12T21:56:01.923403821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hrxvq,Uid:f1473077-124d-4713-a5bb-4d9f6b22ca9f,Namespace:kube-system,Attempt:0,}" Feb 12 21:56:01.958037 env[1242]: time="2024-02-12T21:56:01.955260241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:56:01.958037 env[1242]: time="2024-02-12T21:56:01.955284151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:56:01.958037 env[1242]: time="2024-02-12T21:56:01.955290999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:56:01.958037 env[1242]: time="2024-02-12T21:56:01.955358316Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a37d0e83110b6ea1e9313615f52bdf29f36f3b61438858cd22bc7a31bb04e30 pid=4290 runtime=io.containerd.runc.v2 Feb 12 21:56:01.986286 env[1242]: time="2024-02-12T21:56:01.986255417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hrxvq,Uid:f1473077-124d-4713-a5bb-4d9f6b22ca9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a37d0e83110b6ea1e9313615f52bdf29f36f3b61438858cd22bc7a31bb04e30\"" Feb 12 21:56:01.989509 env[1242]: time="2024-02-12T21:56:01.989476561Z" level=info msg="CreateContainer within sandbox \"1a37d0e83110b6ea1e9313615f52bdf29f36f3b61438858cd22bc7a31bb04e30\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 21:56:01.996942 env[1242]: time="2024-02-12T21:56:01.996895284Z" level=info msg="CreateContainer within sandbox \"1a37d0e83110b6ea1e9313615f52bdf29f36f3b61438858cd22bc7a31bb04e30\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c6fad019669a96d57f843f403b44708a7cf9c4c7c4ce113ab8b4217fb0422886\"" Feb 12 21:56:01.998739 env[1242]: time="2024-02-12T21:56:01.998712323Z" level=info msg="StartContainer for \"c6fad019669a96d57f843f403b44708a7cf9c4c7c4ce113ab8b4217fb0422886\"" Feb 12 21:56:02.033638 env[1242]: time="2024-02-12T21:56:02.032906950Z" level=info msg="StartContainer for \"c6fad019669a96d57f843f403b44708a7cf9c4c7c4ce113ab8b4217fb0422886\" returns successfully" Feb 12 21:56:02.058883 env[1242]: time="2024-02-12T21:56:02.058848229Z" level=info msg="shim disconnected" id=c6fad019669a96d57f843f403b44708a7cf9c4c7c4ce113ab8b4217fb0422886 Feb 12 21:56:02.059125 env[1242]: time="2024-02-12T21:56:02.059111458Z" level=warning msg="cleaning up after shim disconnected" id=c6fad019669a96d57f843f403b44708a7cf9c4c7c4ce113ab8b4217fb0422886 
namespace=k8s.io Feb 12 21:56:02.059194 env[1242]: time="2024-02-12T21:56:02.059184208Z" level=info msg="cleaning up dead shim" Feb 12 21:56:02.065298 env[1242]: time="2024-02-12T21:56:02.065261608Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4374 runtime=io.containerd.runc.v2\n" Feb 12 21:56:02.255983 sshd[3887]: Received disconnect from 36.112.156.46 port 60801:11: Bye Bye [preauth] Feb 12 21:56:02.255983 sshd[3887]: Disconnected from 36.112.156.46 port 60801 [preauth] Feb 12 21:56:02.256630 systemd[1]: sshd@18-139.178.70.99:22-36.112.156.46:60801.service: Deactivated successfully. Feb 12 21:56:02.274701 env[1242]: time="2024-02-12T21:56:02.274674841Z" level=info msg="StopPodSandbox for \"b3e4d899431714ccbf342e831018a9bc59ce705a06ddd7b01bb5c4fd534f268a\"" Feb 12 21:56:02.274883 env[1242]: time="2024-02-12T21:56:02.274847066Z" level=info msg="TearDown network for sandbox \"b3e4d899431714ccbf342e831018a9bc59ce705a06ddd7b01bb5c4fd534f268a\" successfully" Feb 12 21:56:02.274963 env[1242]: time="2024-02-12T21:56:02.274952665Z" level=info msg="StopPodSandbox for \"b3e4d899431714ccbf342e831018a9bc59ce705a06ddd7b01bb5c4fd534f268a\" returns successfully" Feb 12 21:56:02.275131 kubelet[2252]: I0212 21:56:02.275117 2252 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=93808980-6df7-4201-8f6a-88e4a899ed5c path="/var/lib/kubelet/pods/93808980-6df7-4201-8f6a-88e4a899ed5c/volumes" Feb 12 21:56:02.596945 env[1242]: time="2024-02-12T21:56:02.596863415Z" level=info msg="CreateContainer within sandbox \"1a37d0e83110b6ea1e9313615f52bdf29f36f3b61438858cd22bc7a31bb04e30\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 21:56:02.613101 env[1242]: time="2024-02-12T21:56:02.609191878Z" level=info msg="CreateContainer within sandbox \"1a37d0e83110b6ea1e9313615f52bdf29f36f3b61438858cd22bc7a31bb04e30\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d3f12eb3d59433df7779590f5489e58a26f7dd90d1f474f3d68908b4a6c445f5\"" Feb 12 21:56:02.619905 env[1242]: time="2024-02-12T21:56:02.619864473Z" level=info msg="StartContainer for \"d3f12eb3d59433df7779590f5489e58a26f7dd90d1f474f3d68908b4a6c445f5\"" Feb 12 21:56:02.661394 env[1242]: time="2024-02-12T21:56:02.661357667Z" level=info msg="StartContainer for \"d3f12eb3d59433df7779590f5489e58a26f7dd90d1f474f3d68908b4a6c445f5\" returns successfully" Feb 12 21:56:02.693079 env[1242]: time="2024-02-12T21:56:02.693026785Z" level=info msg="shim disconnected" id=d3f12eb3d59433df7779590f5489e58a26f7dd90d1f474f3d68908b4a6c445f5 Feb 12 21:56:02.693079 env[1242]: time="2024-02-12T21:56:02.693076629Z" level=warning msg="cleaning up after shim disconnected" id=d3f12eb3d59433df7779590f5489e58a26f7dd90d1f474f3d68908b4a6c445f5 namespace=k8s.io Feb 12 21:56:02.693079 env[1242]: time="2024-02-12T21:56:02.693084669Z" level=info msg="cleaning up dead shim" Feb 12 21:56:02.698162 env[1242]: time="2024-02-12T21:56:02.698131707Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4437 runtime=io.containerd.runc.v2\n" Feb 12 21:56:03.590698 env[1242]: time="2024-02-12T21:56:03.590665484Z" level=info msg="CreateContainer within sandbox \"1a37d0e83110b6ea1e9313615f52bdf29f36f3b61438858cd22bc7a31bb04e30\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 21:56:03.615491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4039409865.mount: Deactivated successfully. 
Feb 12 21:56:03.638413 env[1242]: time="2024-02-12T21:56:03.638376385Z" level=info msg="CreateContainer within sandbox \"1a37d0e83110b6ea1e9313615f52bdf29f36f3b61438858cd22bc7a31bb04e30\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e39df9626bb18e2546fdb62769fbbb0da48e906bf1fd7b39b3ed9cda87671c6f\"" Feb 12 21:56:03.638841 env[1242]: time="2024-02-12T21:56:03.638819799Z" level=info msg="StartContainer for \"e39df9626bb18e2546fdb62769fbbb0da48e906bf1fd7b39b3ed9cda87671c6f\"" Feb 12 21:56:03.671743 env[1242]: time="2024-02-12T21:56:03.671714571Z" level=info msg="StartContainer for \"e39df9626bb18e2546fdb62769fbbb0da48e906bf1fd7b39b3ed9cda87671c6f\" returns successfully" Feb 12 21:56:03.697353 env[1242]: time="2024-02-12T21:56:03.697309483Z" level=info msg="shim disconnected" id=e39df9626bb18e2546fdb62769fbbb0da48e906bf1fd7b39b3ed9cda87671c6f Feb 12 21:56:03.697559 env[1242]: time="2024-02-12T21:56:03.697542775Z" level=warning msg="cleaning up after shim disconnected" id=e39df9626bb18e2546fdb62769fbbb0da48e906bf1fd7b39b3ed9cda87671c6f namespace=k8s.io Feb 12 21:56:03.697679 env[1242]: time="2024-02-12T21:56:03.697667733Z" level=info msg="cleaning up dead shim" Feb 12 21:56:03.705024 env[1242]: time="2024-02-12T21:56:03.704996076Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4494 runtime=io.containerd.runc.v2\n" Feb 12 21:56:03.881691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e39df9626bb18e2546fdb62769fbbb0da48e906bf1fd7b39b3ed9cda87671c6f-rootfs.mount: Deactivated successfully. 
Feb 12 21:56:04.352942 kubelet[2252]: E0212 21:56:04.352918 2252 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 21:56:04.605770 env[1242]: time="2024-02-12T21:56:04.605165397Z" level=info msg="CreateContainer within sandbox \"1a37d0e83110b6ea1e9313615f52bdf29f36f3b61438858cd22bc7a31bb04e30\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 21:56:04.643675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3383603263.mount: Deactivated successfully. Feb 12 21:56:04.646664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount187401075.mount: Deactivated successfully. Feb 12 21:56:04.659079 env[1242]: time="2024-02-12T21:56:04.659040320Z" level=info msg="CreateContainer within sandbox \"1a37d0e83110b6ea1e9313615f52bdf29f36f3b61438858cd22bc7a31bb04e30\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3a0f0e7c072336e8a12d331bec08806907b1fc0d1ee2bcc5a284853cdc2bfa9e\"" Feb 12 21:56:04.660196 env[1242]: time="2024-02-12T21:56:04.659555503Z" level=info msg="StartContainer for \"3a0f0e7c072336e8a12d331bec08806907b1fc0d1ee2bcc5a284853cdc2bfa9e\"" Feb 12 21:56:04.691741 env[1242]: time="2024-02-12T21:56:04.691570495Z" level=info msg="StartContainer for \"3a0f0e7c072336e8a12d331bec08806907b1fc0d1ee2bcc5a284853cdc2bfa9e\" returns successfully" Feb 12 21:56:04.708595 env[1242]: time="2024-02-12T21:56:04.708552756Z" level=info msg="shim disconnected" id=3a0f0e7c072336e8a12d331bec08806907b1fc0d1ee2bcc5a284853cdc2bfa9e Feb 12 21:56:04.708727 env[1242]: time="2024-02-12T21:56:04.708592287Z" level=warning msg="cleaning up after shim disconnected" id=3a0f0e7c072336e8a12d331bec08806907b1fc0d1ee2bcc5a284853cdc2bfa9e namespace=k8s.io Feb 12 21:56:04.708727 env[1242]: time="2024-02-12T21:56:04.708617165Z" level=info msg="cleaning up dead shim" Feb 12 21:56:04.714681 
env[1242]: time="2024-02-12T21:56:04.714533423Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4553 runtime=io.containerd.runc.v2\n" Feb 12 21:56:05.609505 env[1242]: time="2024-02-12T21:56:05.609462152Z" level=info msg="CreateContainer within sandbox \"1a37d0e83110b6ea1e9313615f52bdf29f36f3b61438858cd22bc7a31bb04e30\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 21:56:05.664065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2590424378.mount: Deactivated successfully. Feb 12 21:56:05.667808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1395124144.mount: Deactivated successfully. Feb 12 21:56:05.681147 env[1242]: time="2024-02-12T21:56:05.681114179Z" level=info msg="CreateContainer within sandbox \"1a37d0e83110b6ea1e9313615f52bdf29f36f3b61438858cd22bc7a31bb04e30\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"22123d54b13ed5084c282aa96b142f318d5b7d9813701261793f53e19b6c80ad\"" Feb 12 21:56:05.682771 env[1242]: time="2024-02-12T21:56:05.681889107Z" level=info msg="StartContainer for \"22123d54b13ed5084c282aa96b142f318d5b7d9813701261793f53e19b6c80ad\"" Feb 12 21:56:05.723756 env[1242]: time="2024-02-12T21:56:05.723715673Z" level=info msg="StartContainer for \"22123d54b13ed5084c282aa96b142f318d5b7d9813701261793f53e19b6c80ad\" returns successfully" Feb 12 21:56:06.211323 kubelet[2252]: I0212 21:56:06.211304 2252 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 21:56:06.211206066 +0000 UTC m=+152.028208215 LastTransitionTime:2024-02-12 21:56:06.211206066 +0000 UTC m=+152.028208215 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 21:56:06.578704 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) 
(seqiv(rfc4106-gcm-aesni)) Feb 12 21:56:08.027490 systemd[1]: run-containerd-runc-k8s.io-22123d54b13ed5084c282aa96b142f318d5b7d9813701261793f53e19b6c80ad-runc.Vysrr4.mount: Deactivated successfully. Feb 12 21:56:08.935832 systemd-networkd[1114]: lxc_health: Link UP Feb 12 21:56:08.942373 systemd-networkd[1114]: lxc_health: Gained carrier Feb 12 21:56:08.942620 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 21:56:09.936798 kubelet[2252]: I0212 21:56:09.936770 2252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-hrxvq" podStartSLOduration=8.935500875 pod.CreationTimestamp="2024-02-12 21:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:56:06.625353577 +0000 UTC m=+152.442355735" watchObservedRunningTime="2024-02-12 21:56:09.935500875 +0000 UTC m=+155.752503027" Feb 12 21:56:10.546696 systemd-networkd[1114]: lxc_health: Gained IPv6LL Feb 12 21:56:12.394214 systemd[1]: run-containerd-runc-k8s.io-22123d54b13ed5084c282aa96b142f318d5b7d9813701261793f53e19b6c80ad-runc.BSW12F.mount: Deactivated successfully. Feb 12 21:56:14.512352 sshd[4122]: pam_unix(sshd:session): session closed for user core Feb 12 21:56:14.518160 systemd[1]: sshd@25-139.178.70.99:22-139.178.89.65:34584.service: Deactivated successfully. Feb 12 21:56:14.518633 systemd[1]: session-27.scope: Deactivated successfully. Feb 12 21:56:14.519512 systemd-logind[1224]: Session 27 logged out. Waiting for processes to exit. Feb 12 21:56:14.520055 systemd-logind[1224]: Removed session 27.