---
title: DeepSpeed Accelerator Setup Guides
tags: getting-started
---

# Contents
- [Contents](#contents)
- [Introduction](#introduction)
- [Intel Architecture (IA) CPU](#intel-architecture-ia-cpu)
- [Intel XPU](#intel-xpu)

# Introduction
DeepSpeed supports accelerators from different vendors, and the setup steps can differ between accelerator families. This guide lets users look up the setup instructions for the accelerator family and hardware they are using.

# Intel Architecture (IA) CPU
DeepSpeed supports CPUs with the Intel Architecture instruction set. The CPU should support at least the AVX2 instruction set, and the AMX instruction set is recommended for best performance.
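
To check whether a Linux host exposes these instruction sets, a quick look at `/proc/cpuinfo` such as the sketch below can be used (this is a convenience check, not part of DeepSpeed):
```
# Sketch: check /proc/cpuinfo for AVX2 and AMX CPU flags (Linux only).
with open("/proc/cpuinfo") as f:
    flags = set(f.read().split())

print("AVX2 supported:", "avx2" in flags)
print("AMX supported:", "amx_tile" in flags)
```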

DeepSpeed has been verified on the following CPU processors:
* 4th Gen Intel® Xeon® Scalable Processors
* 5th Gen Intel® Xeon® Scalable Processors
* 6th Gen Intel® Xeon® Scalable Processors

## Installation steps for Intel Architecture CPU
To install DeepSpeed on an Intel Architecture CPU, use the following steps:
1. Install a gcc compiler
DeepSpeed requires gcc-9 or above to build kernels on Intel Architecture CPUs; install gcc-9 or a newer version.

2. Install numactl
DeepSpeed uses `numactl` for fine-grained CPU core allocation and load balancing; install numactl on your system.
For example, on an Ubuntu system, use the following command:
`sudo apt-get install numactl`

3. Install PyTorch
`pip install torch`

4. Install DeepSpeed
`pip install deepspeed`
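
After installation, the accelerator selection can be sanity-checked. On a host without GPUs, DeepSpeed is expected to pick the CPU accelerator; the quick check below uses the same `get_accelerator()` API shown later in the XPU section:
```
# Sketch: verify that DeepSpeed selects the CPU accelerator on this host.
from deepspeed.accelerator import get_accelerator

print('accelerator:', get_accelerator()._name)  # expected: cpu when no GPU/XPU is present
```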

## How to launch DeepSpeed on Intel Architecture CPU
DeepSpeed can be launched on Intel Architecture CPUs with the default deepspeed command. However, for compute-intensive workloads, Intel Architecture CPUs work best when each worker process runs on a different set of physical CPU cores, so that worker processes do not compete with each other for CPU cores. To bind cores to each worker (rank), use the following command line switch for better performance.
```
deepspeed --bind_cores_to_rank <deepspeed-model-script>
```
This switch automatically detects the number of CPU NUMA nodes on the host, launches the same number of workers, and binds each worker to the cores and memory of a different NUMA node. This improves performance by ensuring that workers do not interfere with each other and that all memory is allocated from local memory.

If a user wishes to have more control over the number of workers and the specific cores used by the workload, the following command line switches can be used.
```
deepspeed --num_accelerators <number-of-workers> --bind_cores_to_rank --bind_core_list <comma-separated-dash-range> <deepspeed-model-script>
```
For example:
```
deepspeed --num_accelerators 4 --bind_cores_to_rank --bind_core_list 0-27,32-59 inference.py
```
This starts 4 workers for the workload. The core list range is divided evenly among the 4 workers: worker 0 takes cores 0-13, worker 1 takes 14-27, worker 2 takes 32-45, and worker 3 takes 46-59. Cores 28-31 and 60-63 are left out because there might be background processes running on the system; leaving some cores idle reduces performance jitter and the straggler effect.

Launching a DeepSpeed model on multiple CPU nodes is similar to other accelerators. Specify `impi` as the launcher and add `--bind_cores_to_rank` for better core binding. Also set the `slots` number in the hostfile according to the number of CPU sockets on each host.

```
# hostfile content should follow the format
# worker-1-hostname slots=<#sockets>
# worker-2-hostname slots=<#sockets>
# ...
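#
# example with two dual-socket hosts (hostnames are illustrative):
# host1 slots=2
# host2 slots=2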

deepspeed --hostfile=<hostfile> --bind_cores_to_rank --launcher impi --master_addr <master-ip> <deepspeed-model-script>
```

## Install with Intel Extension for PyTorch and oneCCL
Although not mandatory, Intel Extension for PyTorch and Intel oneCCL provide better optimizations for LLM models. Intel oneCCL also provides communication optimizations when running an LLM model on multiple nodes. To use DeepSpeed with Intel Extension for PyTorch and oneCCL, use the following steps:
1. Install Intel Extension for PyTorch. This is suggested if you want better LLM inference performance on CPU.
`pip install intel-extension-for-pytorch`

The following steps install the oneCCL binding for PyTorch. This is suggested if you are running DeepSpeed on multiple CPU nodes, for better communication performance. On a single node with multiple CPU sockets, these steps are not needed.

2. Install Intel oneCCL binding for PyTorch
`python -m pip install oneccl_bind_pt -f https://developer.intel.com/ipex-whl-stable-cpu`

3. Install Intel oneCCL; this will be used to build direct oneCCL kernels (CCLBackend kernels)
```
pip install oneccl-devel
pip install impi-devel
```
Then set the environment variables for Intel oneCCL (assuming a conda environment is used).
```
export CPATH=${CONDA_PREFIX}/include:$CPATH
export CCL_ROOT=${CONDA_PREFIX}
export I_MPI_ROOT=${CONDA_PREFIX}
export LD_LIBRARY_PATH=${CONDA_PREFIX}/lib/ccl/cpu:${CONDA_PREFIX}/lib/libfabric:${CONDA_PREFIX}/lib
```
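
With the packages above installed, a minimal sanity check that the oneCCL binding imports and that DeepSpeed can initialize the `ccl` backend might look like the sketch below; it assumes the script is started by the deepspeed launcher so the usual rank and world-size environment variables are set:
```
# Sketch: initialize DeepSpeed's distributed backend through oneCCL.
# Assumes launch via the deepspeed launcher so rank/world-size env vars are set.
import torch
import deepspeed
import oneccl_bindings_for_pytorch  # noqa: F401  (registers the "ccl" backend)

deepspeed.init_distributed(dist_backend="ccl")
print("rank", torch.distributed.get_rank(), "of", torch.distributed.get_world_size())
```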

## Optimize LLM inference with Intel Extension for PyTorch
Intel Extension for PyTorch is compatible with DeepSpeed AutoTP tensor-parallel inference. It allows CPU inference to benefit from both DeepSpeed Automatic Tensor Parallelism and the LLM optimizations of Intel Extension for PyTorch. To use Intel Extension for PyTorch, after calling deepspeed.init_inference, call
```
ipex_model = ipex.llm.optimize(deepspeed_model)
```
to get a model optimized by Intel Extension for PyTorch.
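
Putting the pieces together, a minimal end-to-end sketch might look like the following; the model name, dtype, and tensor-parallel size are illustrative assumptions rather than requirements:
```
# Minimal sketch: DeepSpeed AutoTP inference on CPU combined with
# Intel Extension for PyTorch LLM optimizations.
# Model name, dtype, and tp size below are illustrative assumptions.
import torch
import deepspeed
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-1.3b"  # hypothetical example model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Shard the model across ranks with DeepSpeed Automatic Tensor Parallelism.
deepspeed_model = deepspeed.init_inference(model, tensor_parallel={"tp_size": 2},
                                           dtype=torch.bfloat16)

# Apply Intel Extension for PyTorch LLM optimizations, as described above.
ipex_model = ipex.llm.optimize(deepspeed_model)

inputs = tokenizer("DeepSpeed is", return_tensors="pt")
with torch.no_grad():
    outputs = ipex_model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Run such a script with the launcher described earlier, for example `deepspeed --num_accelerators 2 --bind_cores_to_rank run_llm.py` (a hypothetical script name), so that the number of ranks matches the tensor-parallel size.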

## More examples for using DeepSpeed with Intel Extension for PyTorch on Intel Architecture CPU
Refer to https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/inference/python/llm for a more extensive guide.

# Intel XPU
The DeepSpeed XPU accelerator supports the Intel® Data Center GPU Max Series.

DeepSpeed has been verified on the following GPU products:
* Intel® Data Center GPU Max 1100
* Intel® Data Center GPU Max 1550

## Installation steps for Intel XPU
To install DeepSpeed on Intel XPU, use the following steps:
1. Install the oneAPI base toolkit \
The Intel® oneAPI Base Toolkit (Base Kit) is a core set of tools and libraries, including a DPC++/C++ compiler for building DeepSpeed XPU kernels such as fusedAdam and CPUAdam, as well as high-performance computation libraries required by Intel Extension for PyTorch (IPEX).
For easy download, usage and more details, check [Intel oneAPI base-toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html).
2. Install PyTorch, Intel Extension for PyTorch, and Intel oneCCL Bindings for PyTorch. These packages are required by `xpu_accelerator` for torch functionality and performance, as well as for the communication backend on Intel platforms. The recommended installation reference:
https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu.

3. Install DeepSpeed \
`pip install deepspeed`

## How to use DeepSpeed on Intel XPU
DeepSpeed can be launched on Intel XPU with the deepspeed launch command. Before that, the user needs to activate the oneAPI environment by: \
`source <oneAPI installed path>/setvars.sh`

To validate that the XPU is available and that the XPU accelerator is correctly chosen, here is an example:
```
$ python
>>> import torch; print('torch:', torch.__version__)
torch: 2.3.0
>>> import intel_extension_for_pytorch; print('XPU available:', torch.xpu.is_available())
XPU available: True
>>> from deepspeed.accelerator import get_accelerator; print('accelerator:', get_accelerator()._name)
accelerator: xpu
```

## More examples for using DeepSpeed on Intel XPU
Refer to https://github.com/intel/intel-extension-for-pytorch/tree/release/xpu/2.1.40/examples/gpu/inference/python/llm for a more extensive guide.