[Feature] Add docker files #67

Merged 18 commits on Jan 25, 2022
101 changes: 101 additions & 0 deletions docker/CPU/Dockerfile
FROM openvino/ubuntu18_dev:2021.4.2
ARG PYTHON_VERSION=3.7
ARG TORCH_VERSION=1.8.0
ARG TORCHVISION_VERSION=0.9.0
ARG ONNXRUNTIME_VERSION=1.8.1
USER root
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
libopencv-dev libspdlog-dev \
gnupg \
libssl-dev \
libprotobuf-dev protobuf-compiler \
build-essential \
libjpeg-dev \
libpng-dev \
ccache \
cmake \
gcc \
g++ \
git \
vim \
wget \
curl \
&& rm -rf /var/lib/apt/lists/*

RUN curl -fsSL -v -o ~/miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
chmod +x ~/miniconda.sh && \
~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh && \
/opt/conda/bin/conda install -y python=${PYTHON_VERSION} conda-build pyyaml numpy ipython cython typing typing_extensions mkl mkl-include ninja && \
/opt/conda/bin/conda clean -ya

### pytorch
RUN /opt/conda/bin/pip install torch==${TORCH_VERSION}+cpu torchvision==${TORCHVISION_VERSION}+cpu -f https://download.pytorch.org/whl/cpu/torch_stable.html
ENV PATH /opt/conda/bin:$PATH

### install mmcv-full
RUN /opt/conda/bin/pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cpu/torch${TORCH_VERSION}/index.html

WORKDIR /root/workspace
RUN git clone https://github.com/open-mmlab/mmclassification


### get onnxruntime
RUN wget https://github.com/microsoft/onnxruntime/releases/download/v${ONNXRUNTIME_VERSION}/onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}.tgz \
&& tar -zxvf onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}.tgz

ENV ONNXRUNTIME_DIR=/root/workspace/onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}

### update cmake to 3.20
RUN wget https://github.com/Kitware/CMake/releases/download/v3.20.0/cmake-3.20.0.tar.gz &&\
tar -zxvf cmake-3.20.0.tar.gz &&\
cd cmake-3.20.0 &&\
./bootstrap &&\
make &&\
make install

### install onnxruntime and openvino
RUN /opt/conda/bin/pip install onnxruntime==${ONNXRUNTIME_VERSION} openvino-dev

### build ncnn
RUN git clone https://github.com/Tencent/ncnn.git &&\
cd ncnn &&\
export NCNN_DIR=$(pwd) &&\
git submodule update --init &&\
mkdir -p build && cd build &&\
cmake -DNCNN_VULKAN=OFF -DNCNN_SYSTEM_GLSLANG=ON -DNCNN_BUILD_EXAMPLES=ON -DNCNN_PYTHON=ON -DNCNN_BUILD_TOOLS=ON -DNCNN_BUILD_BENCHMARK=ON -DNCNN_BUILD_TESTS=ON .. &&\
make install &&\
cd /root/workspace/ncnn/python &&\
pip install -e .

### install mmdeploy
WORKDIR /root/workspace
ARG VERSION
RUN git clone https://github.com/open-mmlab/mmdeploy.git &&\
cd mmdeploy &&\
if [ -z ${VERSION} ] ; then echo "No MMDeploy version passed in, building on master" ; else git checkout tags/v${VERSION} -b tag_v${VERSION} ; fi &&\
git submodule update --init --recursive &&\
rm -rf build &&\
mkdir build &&\
cd build &&\
cmake -DMMDEPLOY_TARGET_BACKENDS=ncnn -Dncnn_DIR=/root/workspace/ncnn/build/install/lib/cmake/ncnn .. &&\
make -j$(nproc) &&\
cmake -DMMDEPLOY_TARGET_BACKENDS=ort .. &&\
make -j$(nproc) &&\
cd .. &&\
pip install -e .

### build SDK
RUN cd mmdeploy && rm -rf build/CM* && mkdir -p build && cd build && cmake .. \
-DMMDEPLOY_BUILD_SDK=ON \
-DCMAKE_CXX_COMPILER=g++-7 \
-DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
-Dncnn_DIR=/root/workspace/ncnn/build/install/lib/cmake/ncnn \
-DInferenceEngine_DIR=/opt/intel/openvino/deployment_tools/inference_engine/share \
-DMMDEPLOY_TARGET_DEVICES=cpu \
-DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
-DMMDEPLOY_TARGET_BACKENDS="ort;ncnn;openvino" \
-DMMDEPLOY_CODEBASES=all &&\
cmake --build . -- -j$(nproc) && cmake --install . &&\
    if [ -z ${VERSION} ] ; then echo "Build MMDeploy master for CPU devices succeeded!" ; else echo "Build MMDeploy version v${VERSION} for CPU devices succeeded!" ; fi
84 changes: 84 additions & 0 deletions docker/GPU/Dockerfile
FROM nvcr.io/nvidia/tensorrt:21.04-py3

ARG CUDA=10.2
ARG PYTHON_VERSION=3.8
ARG TORCH_VERSION=1.8.0
ARG TORCHVISION_VERSION=0.9.0
ARG ONNXRUNTIME_VERSION=1.8.1
ENV FORCE_CUDA="1"

ENV DEBIAN_FRONTEND=noninteractive

### update apt and install libs
RUN apt-get update &&\
apt-get install -y vim libsm6 libxext6 libxrender-dev libgl1-mesa-glx git wget libssl-dev libopencv-dev libspdlog-dev --no-install-recommends &&\
rm -rf /var/lib/apt/lists/*

RUN curl -fsSL -v -o ~/miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
chmod +x ~/miniconda.sh && \
~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh && \
/opt/conda/bin/conda install -y python=${PYTHON_VERSION} conda-build pyyaml numpy ipython cython typing typing_extensions mkl mkl-include ninja && \
/opt/conda/bin/conda clean -ya

### pytorch
RUN /opt/conda/bin/conda install pytorch==${TORCH_VERSION} torchvision==${TORCHVISION_VERSION} cudatoolkit=${CUDA} -c pytorch
ENV PATH /opt/conda/bin:$PATH

### install mmcv-full
RUN /opt/conda/bin/pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu${CUDA//./}/torch${TORCH_VERSION}/index.html &&\
/opt/conda/bin/pip install mmdet==2.19.0

WORKDIR /root/workspace
### get onnxruntime
RUN wget https://github.com/microsoft/onnxruntime/releases/download/v${ONNXRUNTIME_VERSION}/onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}.tgz \
&& tar -zxvf onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}.tgz &&\
pip install onnxruntime-gpu==${ONNXRUNTIME_VERSION}

### copy the TensorRT python packages from the system python to conda
RUN cp -r /usr/local/lib/python3.8/dist-packages/tensorrt* /opt/conda/lib/python3.8/site-packages/

### update cmake to 3.20
RUN wget https://github.com/Kitware/CMake/releases/download/v3.20.0/cmake-3.20.0.tar.gz &&\
tar -zxvf cmake-3.20.0.tar.gz &&\
cd cmake-3.20.0 &&\
./bootstrap &&\
make &&\
make install

### install mmdeploy
ENV ONNXRUNTIME_DIR=/root/workspace/onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}
ENV TENSORRT_DIR=/workspace/tensorrt
ARG VERSION
RUN git clone https://github.com/open-mmlab/mmdeploy &&\
cd mmdeploy &&\
if [ -z ${VERSION} ] ; then echo "No MMDeploy version passed in, building on master" ; else git checkout tags/v${VERSION} -b tag_v${VERSION} ; fi &&\
git submodule update --init --recursive &&\
rm -rf build &&\
mkdir build &&\
cd build &&\
cmake -DMMDEPLOY_TARGET_BACKENDS=ort .. &&\
make -j$(nproc) &&\
cmake -DMMDEPLOY_TARGET_BACKENDS=trt .. &&\
make -j$(nproc) &&\
cd .. &&\
pip install -e .

### build sdk
RUN git clone https://github.com/openppl-public/ppl.cv.git &&\
cd ppl.cv &&\
./build.sh cuda
RUN cd /root/workspace/mmdeploy &&\
rm -rf build/CM* &&\
mkdir -p build && cd build &&\
cmake .. \
-DMMDEPLOY_BUILD_SDK=ON \
-DCMAKE_CXX_COMPILER=g++ \
-Dpplcv_DIR=/root/workspace/ppl.cv/cuda-build/install/lib/cmake/ppl \
-DTENSORRT_DIR=${TENSORRT_DIR} \
-DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
-DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
-DMMDEPLOY_TARGET_BACKENDS="trt" \
-DMMDEPLOY_CODEBASES=all &&\
cmake --build . -- -j$(nproc) && cmake --install . &&\
    if [ -z ${VERSION} ] ; then echo "Build MMDeploy master for GPU devices succeeded!" ; else echo "Build MMDeploy version v${VERSION} for GPU devices succeeded!" ; fi
29 changes: 29 additions & 0 deletions docker/README.md
## Docker usage

We provide two Dockerfiles, one for CPU and one for GPU. For CPU users, we install MMDeploy with the ONNXRuntime, ncnn and OpenVINO backends. For GPU users, we install MMDeploy with the TensorRT backend. Users can also pin a specific MMDeploy version when building the docker image.

### Build docker image

For CPU users, build the docker image with the latest MMDeploy through:
```
cd mmdeploy
docker build docker/CPU/ -t mmdeploy:master
```
For GPU users, build the docker image with the latest MMDeploy through:
```
cd mmdeploy
docker build docker/GPU/ -t mmdeploy:master
```

To install a specific MMDeploy version, append `--build-arg VERSION=${VERSION}` to the build command. Taking GPU as an example:
```
cd mmdeploy
docker build docker/GPU/ -t mmdeploy:0.1.0 --build-arg VERSION=0.1.0
```
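The Dockerfiles also declare further build arguments (`PYTHON_VERSION`, `TORCH_VERSION`, `TORCHVISION_VERSION`, `ONNXRUNTIME_VERSION`, plus `CUDA` in the GPU image) which can be overridden the same way. A sketch, keeping the default versions explicit:

```
cd mmdeploy
docker build docker/GPU/ -t mmdeploy:0.1.0 \
    --build-arg VERSION=0.1.0 \
    --build-arg TORCH_VERSION=1.8.0 \
    --build-arg TORCHVISION_VERSION=0.9.0
```

Note that the versions must be mutually compatible (e.g. the mmcv-full wheel index is keyed on the torch version).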

### Run docker container

After the docker image is built successfully, we can use `docker run` to launch the container. Taking the GPU image as an example:
```
docker run --gpus all -it -p 8080:8081 mmdeploy:master
```