Merge pull request #1800 from TeslaZhao/develop #1801

Merged · 1 commit · May 24, 2022
58 changes: 53 additions & 5 deletions doc/Install_CN.md
@@ -4,10 +4,16 @@

- [1. Use devel docker](#1)
- [Serving devel images](#1.1)
- [CPU images](#1.1.1)
- [GPU images](#1.1.2)
- [ARM & XPU images](#1.1.3)
- [Paddle devel images](#1.2)
- [CPU images](#1.2.1)
- [GPU images](#1.2.2)
- [2. Install Wheel Packages](#2)
- [Online Install](#2.1)
- [Offline Install](#2.2)
- [ARM & XPU Install](#2.3)
- [3. Installation Check](#3)


@@ -33,33 +39,52 @@
| CUDA10.2 + cuDNN 7 | 0.9.0-cuda10.2-cudnn7-devel | Ubuntu 16 | 2.3.0-gpu-cuda10.2-cudnn7 | Ubuntu 18 |
| CUDA10.2 + cuDNN 8 | 0.9.0-cuda10.2-cudnn8-devel | Ubuntu 16 || Ubuntu 18 |
| CUDA11.2 + cuDNN 8 | 0.9.0-cuda11.2-cudnn8-devel | Ubuntu 16 | 2.3.0-gpu-cuda11.2-cudnn8 | Ubuntu 18 |
| ARM + XPU | xpu-arm | CentOS 8.3 |||

For **Windows 10 users**, please refer to the document [Paddle Serving Guide for Windows Platform](Windows_Tutorial_CN.md).


<a name="1.1"></a>

### 1.1 Serving Devel Images (choose one: CPU or GPU)

<a name="1.1.1"></a>

**CPU:**
```
# Start CPU Docker Container
docker pull registry.baidubce.com/paddlepaddle/serving:0.9.0-devel
docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.9.0-devel bash
docker exec -it test bash
docker run -p 9292:9292 --name test_cpu -dit registry.baidubce.com/paddlepaddle/serving:0.9.0-devel bash
docker exec -it test_cpu bash
git clone https://github.com/PaddlePaddle/Serving
```

<a name="1.1.2"></a>

**GPU:**
```
# Start GPU Docker Container
docker pull registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda11.2-cudnn8-devel
nvidia-docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda11.2-cudnn8-devel bash
nvidia-docker exec -it test bash
nvidia-docker run -p 9292:9292 --name test_gpu -dit registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda11.2-cudnn8-devel bash
nvidia-docker exec -it test_gpu bash
git clone https://github.com/PaddlePaddle/Serving
```

<a name="1.1.3"></a>

**ARM & XPU:**
```
docker pull registry.baidubce.com/paddlepaddle/serving:xpu-arm
docker run -p 9292:9292 --name test_arm_xpu -dit registry.baidubce.com/paddlepaddle/serving:xpu-arm bash
docker exec -it test_arm_xpu bash
git clone https://github.com/PaddlePaddle/Serving
```

<a name="1.2"></a>

### 1.2 Paddle Devel Images (choose one: CPU or GPU)

<a name="1.2.1"></a>

**CPU:**
```
### Start CPU Docker Container
@@ -71,6 +96,9 @@ git clone https://github.com/PaddlePaddle/Serving
### The Paddle devel image needs to run the following script to add the dependencies required by Serving
bash Serving/tools/paddle_env_install.sh
```

<a name="1.2.2"></a>

**GPU:**
```
### Start GPU Docker Container
@@ -103,6 +131,7 @@ pip3 install -r python/requirements.txt
<a name="2.1"></a>

### 2.1 Online Install
Online installation downloads and installs the packages from `pypi`.

```shell
pip3 install paddle-serving-client==0.9.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
@@ -158,6 +187,7 @@ pip3 install https://paddle-inference-lib.bj.bcebos.com/2.3.0/python/Linux/GPU/x
<a name="2.2"></a>

### 2.2 Offline Install
Offline installation means downloading all Paddle and Serving packages and dependency libraries in advance and installing them in an environment with no or weak network access.
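The concrete download and install steps are collapsed in this diff view. As a rough, generic sketch only (not the procedure documented in this section), the wheels can be staged on an internet-connected machine with `pip download`; the directory name `./offline_wheels` and the CPU package set shown are assumptions for illustration:

```shell
# On a machine with network access: fetch the Serving wheels and all of
# their dependencies into a local directory, then copy that directory to
# the offline host (e.g. via scp or removable media).
# Illustrative only: versions follow section 2.1, CPU packages shown.
mkdir -p ./offline_wheels
pip3 download paddle-serving-client==0.9.0 paddle-serving-app==0.9.0 \
    paddle-serving-server==0.9.0 paddlepaddle==2.3.0 \
    -d ./offline_wheels -i https://pypi.tuna.tsinghua.edu.cn/simple
```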

**1. Install offline Wheel packages**

@@ -223,6 +253,24 @@ python3 install.py --cuda_version="" --python_version="py39" --device="cpu" --se
python3 install.py --cuda_version="112" --python_version="py36" --device="GPU" --serving_version="no_install" --paddle_version="2.3.0"
```

<a name="2.3"></a>

### 2.3 ARM & XPU Wheel Package Install

Since few users run ARM and XPU, the Wheel packages for this environment are provided separately below. `paddle_serving_client` is only provided for `py36`; if you need other versions, please contact us.

```
pip3.6 install https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_app-0.9.0-py3-none-any.whl
pip3.6 install https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_client-0.9.0-cp36-none-any.whl
pip3.6 install https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl
```

Binary package download address:
```
wget https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-xpu-aarch64-0.9.0.tar.gz
```


<a name="3"></a>

## 3. Environment Check
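The body of this section is collapsed in the diff view. As an unofficial, minimal sanity check only (assuming the default import names of the wheels installed above), the following confirms that the Paddle runtime works and the Serving packages import cleanly:

```shell
# Check the Paddle runtime itself (paddle.utils.run_check is part of PaddlePaddle).
python3 -c "import paddle; paddle.utils.run_check()"
# Check that the Serving wheels can be imported.
python3 -c "import paddle_serving_client, paddle_serving_server, paddle_serving_app; print('Serving imports OK')"
```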
73 changes: 60 additions & 13 deletions doc/Install_EN.md
@@ -4,10 +4,16 @@

- [1.Use devel docker](#1)
- [Serving devel images](#1.1)
- [CPU images](#1.1.1)
- [GPU images](#1.1.2)
- [ARM & XPU images](#1.1.3)
- [Paddle devel images](#1.2)
- [CPU images](#1.2.1)
- [GPU images](#1.2.2)
- [2.Install Wheel Packages](#2)
- [Online Install](#2.1)
- [Offline Install](#2.2)
- [ARM & XPU Install](#2.3)
- [3.Installation Check](#3)

We **strongly recommend** building **Paddle Serving** in Docker. For more images, please refer to [Docker Image List](Docker_Images_CN.md).
@@ -28,6 +34,7 @@
| CUDA10.2 + cuDNN 7 | 0.9.0-cuda10.2-cudnn7-devel | Ubuntu 16 | 2.3.0-gpu-cuda10.2-cudnn7 | Ubuntu 18 |
| CUDA10.2 + cuDNN 8 | 0.9.0-cuda10.2-cudnn8-devel | Ubuntu 16 | None | None |
| CUDA11.2 + cuDNN 8 | 0.9.0-cuda11.2-cudnn8-devel | Ubuntu 16 | 2.3.0-gpu-cuda11.2-cudnn8 | Ubuntu 18 |
| ARM + XPU | xpu-arm | CentOS 8.3 | None | None |

For **Windows 10 users**, please refer to the document [Paddle Serving Guide for Windows Platform](Windows_Tutorial_CN.md).

@@ -36,46 +43,68 @@ For **Windows 10 users**, please refer to the document [Paddle Serving Guide for

### 1.1 Serving Devel Images (choose one: CPU or GPU)

<a name="1.1.1"></a>

**CPU:**
```
# Start CPU Docker Container
docker pull registry.baidubce.com/paddlepaddle/serving:0.9.0-devel
docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.9.0-devel bash
docker exec -it test bash
docker run -p 9292:9292 --name test_cpu -dit registry.baidubce.com/paddlepaddle/serving:0.9.0-devel bash
docker exec -it test_cpu bash
git clone https://github.com/PaddlePaddle/Serving
```

<a name="1.1.2"></a>

**GPU:**
```
# Start GPU Docker Container
docker pull registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda11.2-cudnn8-devel
nvidia-docker run -p 9292:9292 --name test -dit docker pull registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda11.2-cudnn7-devel bash
nvidia-docker exec -it test bash
nvidia-docker run -p 9292:9292 --name test_gpu -dit registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda11.2-cudnn8-devel bash
nvidia-docker exec -it test_gpu bash
git clone https://github.com/PaddlePaddle/Serving
```

<a name="1.1.3"></a>

**ARM & XPU:**
```
docker pull registry.baidubce.com/paddlepaddle/serving:xpu-arm
docker run -p 9292:9292 --name test_arm_xpu -dit registry.baidubce.com/paddlepaddle/serving:xpu-arm bash
docker exec -it test_arm_xpu bash
git clone https://github.com/PaddlePaddle/Serving
```

<a name="1.2"></a>

### 1.2 Paddle Devel Images (choose one: CPU or GPU)

<a name="1.2.1"></a>

**CPU:**
```
```shell
# Start CPU Docker Container
docker pull registry.baidubce.com/paddlepaddle/paddle:2.3.0
docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/paddle:2.3.0 bash
docker exec -it test bash
docker run -p 9292:9292 --name test_cpu -dit registry.baidubce.com/paddlepaddle/paddle:2.3.0 bash
docker exec -it test_cpu bash
git clone https://github.com/PaddlePaddle/Serving

# Paddle dev image needs to run the following script to increase the dependencies required by Serving
### The Paddle devel image needs to run the following script to add the dependencies required by Serving
bash Serving/tools/paddle_env_install.sh
```

<a name="1.2.2"></a>

**GPU:**
```
# Start GPU Docker

```shell
### Start GPU Docker
nvidia-docker pull registry.baidubce.com/paddlepaddle/paddle:2.3.0-gpu-cuda11.2-cudnn8
nvidia-docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/paddle:2.3.0-gpu-cuda11.2-cudnn8 bash
nvidia-docker exec -it test bash
nvidia-docker run -p 9292:9292 --name test_gpu -dit registry.baidubce.com/paddlepaddle/paddle:2.3.0-gpu-cuda11.2-cudnn8 bash
nvidia-docker exec -it test_gpu bash
git clone https://github.com/PaddlePaddle/Serving

# Paddle development image needs to execute the following script to increase the dependencies required by Serving
### The Paddle devel image needs to run the following script to add the dependencies required by Serving
bash Serving/tools/paddle_env_install.sh
```

@@ -98,6 +127,7 @@ Install the service whl package. There are three types of client, app and server
<a name="2.1"></a>

### 2.1 Online Install
Online installation downloads and installs the packages from `pypi`.

```shell
pip3 install paddle-serving-client==0.9.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
@@ -152,6 +182,7 @@ pip3 install https://paddle-inference-lib.bj.bcebos.com/2.3.0/python/Linux/GPU/x
<a name="2.2"></a>

### 2.2 Offline Install
Offline installation means downloading all Paddle and Serving packages and dependency libraries in advance and installing them in an environment with no or weak network access.
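The detailed steps are collapsed in this diff view. As an illustrative sketch only (not the documented procedure), installing from a directory of wheels that was downloaded beforehand on a networked machine could look like this on the offline host; the directory name `./offline_wheels` is an assumption:

```shell
# On the offline host: install strictly from the local wheel directory,
# without contacting any package index (--no-index).
pip3 install --no-index --find-links=./offline_wheels \
    paddle-serving-client==0.9.0 paddle-serving-app==0.9.0 paddle-serving-server==0.9.0
```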

**1.Install offline wheel packages**

Expand Down Expand Up @@ -210,6 +241,22 @@ python3 install.py --cuda_version="" --python_version="py39" --device="cpu" --se
python3 install.py --cuda_version="112" --python_version="py36" --device="GPU" --serving_version="no_install" --paddle_version="2.3.0"
```

<a name="2.3"></a>

### 2.3 ARM & XPU Install

Since few users run ARM and XPU, the Wheel packages for this environment are provided separately below. `paddle_serving_client` is only provided for `py36`; if you need other versions, please contact us.
```
pip3.6 install https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_app-0.9.0-py3-none-any.whl
pip3.6 install https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_client-0.9.0-cp36-none-any.whl
pip3.6 install https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl
```

Binary package download address:
```
wget https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-xpu-aarch64-0.9.0.tar.gz
```

<a name="3"></a>

## 3.Installation Check
17 changes: 9 additions & 8 deletions doc/Latest_Packages_CN.md
@@ -48,22 +48,23 @@
### Wheel Package Links

Kunlun Wheel packages for the ARM CPU environment:
```

```shell
# paddle-serving-server
https://paddle-serving.bj.bcebos.com/whl/xpu/arm/paddle_serving_server_xpu-0.0.0.post2-py3-none-any.whl
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl
# paddle-serving-client
https://paddle-serving.bj.bcebos.com/whl/xpu/arm/paddle_serving_client-0.0.0-cp36-none-any.whl
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_client-0.9.0-cp36-none-any.whl
# paddle-serving-app
https://paddle-serving.bj.bcebos.com/whl/xpu/arm/paddle_serving_app-0.0.0-py3-none-any.whl
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_app-0.9.0-py3-none-any.whl

# SERVING BIN
https://paddle-serving.bj.bcebos.com/bin/serving-xpu-aarch64-0.0.0.tar.gz
wget https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-xpu-aarch64-0.9.0.tar.gz
```

Kunlun Wheel packages for the x86 CPU environment
```
v0.9.0 Kunlun Wheel package for the x86 + XPU environment
```shell
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl
```



17 changes: 8 additions & 9 deletions doc/Latest_Packages_EN.md
@@ -46,23 +46,22 @@ for kunlun user who uses arm-xpu or x86-xpu can download the wheel packages as f

### Wheel Package Links

for arm kunlun user
```
For ARM Kunlun users, the wheel packages are as follows.
```shell
# paddle-serving-server
https://paddle-serving.bj.bcebos.com/whl/xpu/arm/paddle_serving_server_xpu-0.0.0.post2-py3-none-any.whl
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl
# paddle-serving-client
https://paddle-serving.bj.bcebos.com/whl/xpu/arm/paddle_serving_client-0.0.0-cp36-none-any.whl
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_client-0.9.0-cp36-none-any.whl
# paddle-serving-app
https://paddle-serving.bj.bcebos.com/whl/xpu/arm/paddle_serving_app-0.0.0-py3-none-any.whl
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_app-0.9.0-py3-none-any.whl

# SERVING BIN
https://paddle-serving.bj.bcebos.com/bin/serving-xpu-aarch64-0.0.0.tar.gz
wget https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-xpu-aarch64-0.9.0.tar.gz
```

for x86 kunlun user
```
For x86 XPU users, the wheel packages are here.
```shell
https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl
```