
Commit c885710

[Doc] Add install doc
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
1 parent 46977f9 commit c885710

3 files changed: 136 additions, 24 deletions

README.md

Lines changed: 2 additions & 2 deletions

@@ -39,7 +39,7 @@ By using vLLM Ascend plugin, popular open-source models, including Transformer-l
 * PyTorch >= 2.4.0, torch-npu >= 2.4.0
 * vLLM (the same version as vllm-ascend)
 
-Find more about how to setup your environment step by step in [here](docs/installation.md).
+Find more about how to setup your environment step by step in [here](docs/source/installation.md).
 
 ## Getting Started
 
@@ -68,7 +68,7 @@ Run the following command to start the vLLM server with the [Qwen/Qwen2.5-0.5B-I
 vllm serve Qwen/Qwen2.5-0.5B-Instruct
 curl http://localhost:8000/v1/models
 ```
-**Please refer to [official docs](./docs/index.md) for more details.**
+**Please refer to [official docs](https://vllm-ascend.readthedocs.io/en/latest/) for more details.**
 
 ## Contributing
 See [CONTRIBUTING](docs/source/developer_guide/contributing.md) for more details, which is a step-by-step guide to help you set up development environment, build and test.

README.zh.md

Lines changed: 2 additions & 2 deletions

@@ -39,7 +39,7 @@ The vLLM Ascend plugin (`vllm-ascend`) lets vLLM run seamlessly on Ascend NPU
 * PyTorch >= 2.4.0, torch-npu >= 2.4.0
 * vLLM (same version as vllm-ascend)
 
-See [here](docs/installation.md) for how to prepare your environment step by step.
+See [here](docs/source/installation.md) for how to prepare your environment step by step.
 
 ## Getting Started
 
@@ -69,7 +69,7 @@ vllm serve Qwen/Qwen2.5-0.5B-Instruct
 curl http://localhost:8000/v1/models
 ```
 
-**Please refer to the [official docs](./docs/index.md) for more details.**
+**Please refer to the [official docs](https://vllm-ascend.readthedocs.io/en/latest/) for more details.**
 
 ## Contributing
 See [CONTRIBUTING](docs/source/developer_guide/contributing.zh.md) for more details; it is a step-by-step guide that helps you set up the development environment, build, and test.

docs/source/installation.md

Lines changed: 132 additions & 20 deletions
@@ -1,22 +1,49 @@
 # Installation
 
-## Dependencies
-| Requirement | Supported version | Recommended version | Note |
-| ------------ | ------- | ----------- | ----------- |
-| Python | >= 3.9 | [3.10](https://www.python.org/downloads/) | Required for vllm |
-| CANN | >= 8.0.RC2 | [8.0.RC3](https://www.hiascend.com/developer/download/community/result?module=cann&cann=8.0.0.beta1) | Required for vllm-ascend and torch-npu |
-| torch-npu | >= 2.4.0 | [2.5.1rc1](https://gitee.com/ascend/pytorch/releases/tag/v6.0.0.alpha001-pytorch2.5.1) | Required for vllm-ascend |
-| torch | >= 2.4.0 | [2.5.1](https://github.com/pytorch/pytorch/releases/tag/v2.5.1) | Required for torch-npu and vllm |
+This document describes how to install vllm-ascend manually.
 
-## Prepare Ascend NPU environment
+## Requirements
 
-Below is a quick note to install recommended version software:
+- OS: Linux
+- Python: 3.10 or higher
+- Hardware with an Ascend NPU, usually an Atlas 800 A2 series machine.
+- Software (a quick version check follows the table):
 
-### Containerized installation
+| Software  | Supported version | Note |
+| --------- | ----------------- | ---- |
+| CANN      | >= 8.0.0.beta1    | Required for vllm-ascend and torch-npu |
+| torch-npu | >= 2.5.1rc1       | Required for vllm-ascend |
+| torch     | >= 2.5.1          | Required for torch-npu and vllm |
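
As a quick sanity check, a sketch along these lines (assuming Python and the torch packages are managed with pip) prints the installed versions to compare against the table above:

```bash
# Compare installed versions against the requirements table.
python3 --version
pip3 show torch torch-npu | grep -E '^(Name|Version)'
```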
 
-You can use the [container image](https://hub.docker.com/r/ascendai/cann) directly with one line command:
+## Configure a new environment
+
+Before installing the package, make sure that the firmware and driver for the NPU are installed correctly, i.e. that the `npu-smi` command is available.
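
For example, a minimal check like the following should list the NPU devices if the firmware and driver are healthy (the output format varies with the driver version):

```bash
# Lists the installed Ascend NPU devices and their status.
npu-smi info
```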
+
+> Tips: following the instructions in the [Ascend Installation Guide](https://ascend.github.io/docs/sources/ascend/quick_install.html) can help you set up the environment easily.
+
+Once that is done, follow either the **Set up using Python** or the **Set up using Docker** section below to install and use vllm-ascend.
+
+If you want to install vllm-ascend in a local bare-metal environment by hand, you need to install CANN first; otherwise, you can skip this step.
 
 ```bash
+# Create a virtual environment
+python -m venv vllm-ascend-env
+source vllm-ascend-env/bin/activate
+# Install python packages.
+pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple attrs numpy==1.24.0 decorator sympy cffi pyyaml pathlib2 psutil protobuf scipy requests absl-py wheel typing_extensions
+# Download and install the CANN package from the official website.
+wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/CANN/CANN%208.0.0/Ascend-cann-toolkit_8.0.0_linux-aarch64.run
+sh Ascend-cann-toolkit_8.0.0_linux-aarch64.run --full
+wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/CANN/CANN%208.0.0/Ascend-cann-kernels-910b_8.0.0_linux-aarch64.run
+sh Ascend-cann-kernels-910b_8.0.0_linux-aarch64.run --full
+```
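
After the toolkit and kernels packages are installed, you typically need to source the CANN environment script before building anything against it. A minimal sketch, assuming the default install location:

```bash
# Default CANN install path; adjust if you installed it elsewhere.
source /usr/local/Ascend/ascend-toolkit/set_env.sh
```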
+
+## Set up using Python
+
+First of all, make sure you have an environment with CANN installed. You can get one via the **Configure a new environment** step above, or by using a CANN container directly:
+
+```bash
+# Set up a CANN container using docker
 docker run \
 --name vllm-ascend-env \
 --device /dev/davinci1 \
@@ -28,28 +55,113 @@ docker run \
 -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
 -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
 -v /etc/ascend_install.info:/etc/ascend_install.info \
--it quay.io/ascend/cann:8.0.rc3.beta1-910b-ubuntu22.04-py3.10 bash
+-it quay.io/ascend/cann:8.0.0.beta1-910b-ubuntu22.04-py3.10 bash
 ```
 
-You do not need to install `torch` and `torch_npu` manually, they will be automatically installed as `vllm-ascend` dependencies.
+Then install vllm from source code:
 
-### Manual installation
+```bash
+git clone https://github.com/vllm-project/vllm
+cd vllm
+VLLM_TARGET_DEVICE=empty python setup.py install
+```
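
Setting `VLLM_TARGET_DEVICE=empty` builds vllm without a device-specific backend, so that device support can come from a plugin such as vllm-ascend. A quick, minimal way to confirm the install worked:

```bash
# Should print the installed vllm version without errors.
python3 -c "import vllm; print(vllm.__version__)"
```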
 
-Or follow the instructions provided in the [Ascend Installation Guide](https://ascend.github.io/docs/sources/ascend/quick_install.html) to set up the environment.
+Then you can install vllm-ascend from a pre-built wheel or from source code.
 
-## Building
+### Pre-built wheels (not supported yet)
 
-### Build Python package from source
+```bash
+pip install vllm-ascend -f https://download.pytorch.org/whl/torch/
+```
+
+### Build wheel from source
 
 ```bash
 git clone https://github.com/vllm-project/vllm-ascend.git
 cd vllm-ascend
-pip install -e .
+pip install -e . -f https://download.pytorch.org/whl/torch/
+```
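
The `-f` flag points pip at the PyTorch wheel index so the pinned torch dependency can be resolved. To confirm the plugin is importable afterwards (a sketch; the module name `vllm_ascend` follows the package naming used elsewhere in this doc):

```bash
# Should import the plugin module without errors.
python3 -c "import vllm_ascend; print('vllm-ascend imported OK')"
```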
+
+## Set up using Docker
+
+> Tips: CANN, torch, torch_npu, vllm and vllm_ascend are already pre-installed in the Docker image.
+
+### Pre-built images (not supported yet)
+
+Just pull the image and run it with bash:
+
+```bash
+docker pull quay.io/ascend/vllm-ascend:latest
+
+docker run \
+--name vllm-ascend-env \
+--device /dev/davinci1 \
+--device /dev/davinci_manager \
+--device /dev/devmm_svm \
+--device /dev/hisi_hdc \
+-v /usr/local/dcmi:/usr/local/dcmi \
+-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
+-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
+-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
+-v /etc/ascend_install.info:/etc/ascend_install.info \
+-it quay.io/ascend/vllm-ascend:0.7.1rc1 bash
 ```
 
-### Build container image from source
+### Build image from source
+
+If you want to build the Docker image from the main branch, follow these steps:
 
 ```bash
 git clone https://github.com/vllm-project/vllm-ascend.git
 cd vllm-ascend
-docker build -t vllm-ascend-dev-image -f ./Dockerfile .
+
+docker build -t vllm-ascend-dev-image:latest -f ./Dockerfile .
+
+docker run \
+--name vllm-ascend-env \
+--device /dev/davinci1 \
+--device /dev/davinci_manager \
+--device /dev/devmm_svm \
+--device /dev/hisi_hdc \
+-v /usr/local/dcmi:/usr/local/dcmi \
+-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
+-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
+-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
+-v /etc/ascend_install.info:/etc/ascend_install.info \
+-it vllm-ascend-dev-image:latest bash
+```
+
+## Extra information
+
+### Verify installation
+
+Create and run a simple inference test. The `example.py` script looks like this:
+
+```python
+from vllm import LLM, SamplingParams
+
+prompts = [
+    "Hello, my name is",
+    "The president of the United States is",
+    "The capital of France is",
+    "The future of AI is",
+]
+
+# Create a sampling params object.
+sampling_params = SamplingParams(max_tokens=100, temperature=0.0)
+# Create an LLM.
+llm = LLM(model="facebook/opt-125m")
+
+# Generate texts from the prompts.
+outputs = llm.generate(prompts, sampling_params)
+for output in outputs:
+    prompt = output.prompt
+    generated_text = output.outputs[0].text
+    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
+```
+
+Then run:
+```bash
+# export VLLM_USE_MODELSCOPE=true to speed up download if HuggingFace is not reachable.
+python example.py
 ```
