
Commit

Merge branch 'ros2' into ros2-efficient-ps
vniclas committed Aug 22, 2022
2 parents d771ef0 + 23f37a5 commit 4b818f8
Showing 42 changed files with 1,491 additions and 447 deletions.
16 changes: 9 additions & 7 deletions Dockerfile
@@ -1,11 +1,11 @@
FROM ubuntu:20.04

ARG branch
ARG branch=master

# Install dependencies
RUN apt-get update && \
apt-get --yes install git sudo
RUN DEBIAN_FRONTEND="noninteractive" apt-get -y install tzdata
apt-get --yes install git sudo && \
DEBIAN_FRONTEND="noninteractive" apt-get -y install tzdata

# Add Tini
ENV TINI_VERSION v0.19.0
@@ -16,12 +16,14 @@ ENTRYPOINT ["/tini", "--"]
# Clone the repo and install the toolkit
RUN git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr -b $branch
WORKDIR "/opendr"
RUN ./bin/install.sh
RUN ./bin/install.sh && \
rm -rf /root/.cache/* && \
apt-get clean

# Create script for starting Jupyter Notebook
RUN /bin/bash -c "source ./bin/activate.sh; pip3 install jupyter"
RUN echo "#!/bin/bash\n source ./bin/activate.sh\n ./venv/bin/jupyter notebook --port=8888 --no-browser --ip 0.0.0.0 --allow-root" > start.sh
RUN chmod +x start.sh
RUN /bin/bash -c "source ./bin/activate.sh; pip3 install jupyter" && \
echo "#!/bin/bash\n source ./bin/activate.sh\n ./venv/bin/jupyter notebook --port=8888 --no-browser --ip 0.0.0.0 --allow-root" > start.sh && \
chmod +x start.sh

# Start Jupyter Notebook inside OpenDR
CMD ["./start.sh"]
18 changes: 9 additions & 9 deletions Dockerfile-cuda
@@ -1,17 +1,15 @@
FROM nvidia/cuda:11.2.0-cudnn8-devel-ubuntu20.04

ARG branch
ARG branch=master

# Fix NVIDIA CUDA Linux repository key rotation
ENV APT_KEY_DONT_WARN_ON_DANGEROUS_USAGE=1
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu$(cat /etc/os-release | grep VERSION_ID | awk '{print substr($0,13,5)}' | awk -F'.' '{print $1$2}')/x86_64/3bf863cc.pub

ARG branch

# Install dependencies
RUN apt-get update && \
apt-get --yes install git sudo apt-utils
RUN DEBIAN_FRONTEND="noninteractive" apt-get -y install tzdata
apt-get --yes install git sudo apt-utils && \
DEBIAN_FRONTEND="noninteractive" apt-get -y install tzdata

# Add Tini
ENV TINI_VERSION v0.19.0
@@ -25,12 +23,14 @@ RUN sudo apt-get --yes install build-essential
ENV OPENDR_DEVICE gpu
RUN git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr -b $branch
WORKDIR "/opendr"
RUN ./bin/install.sh
RUN ./bin/install.sh && \
rm -rf /root/.cache/* && \
apt-get clean

# Create script for starting Jupyter Notebook
RUN /bin/bash -c "source ./bin/activate.sh; pip3 install jupyter"
RUN echo "#!/bin/bash\n source ./bin/activate.sh\n ./venv/bin/jupyter notebook --port=8888 --no-browser --ip 0.0.0.0 --allow-root" > start.sh
RUN chmod +x start.sh
RUN /bin/bash -c "source ./bin/activate.sh; pip3 install jupyter" && \
echo "#!/bin/bash\n source ./bin/activate.sh\n ./venv/bin/jupyter notebook --port=8888 --no-browser --ip 0.0.0.0 --allow-root" > start.sh && \
chmod +x start.sh

# Start Jupyter Notebook inside OpenDR
CMD ["./start.sh"]
1 change: 1 addition & 0 deletions README.md
@@ -11,6 +11,7 @@ ______________________________________________________________________
<a href="docs/reference/installation.md">Installation</a> •
<a href="#using-opendr-toolkit">Using OpenDR toolkit</a> •
<a href="projects">Examples</a> •
<a href="docs/reference/customize.md">Customization</a> •
<a href="#roadmap">Roadmap</a> •
<a href="CHANGELOG.md">Changelog</a> •
<a href="LICENSE">License</a>
3 changes: 3 additions & 0 deletions bin/install.sh
@@ -39,6 +39,9 @@ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main
make install_compilation_dependencies
make install_runtime_dependencies

# Install additional ROS packages
sudo apt-get install ros-noetic-vision-msgs ros-noetic-audio-common-msgs

# If working on GPU install GPU dependencies as needed
if [[ "${OPENDR_DEVICE}" == "gpu" ]]; then
pip3 uninstall -y mxnet
58 changes: 58 additions & 0 deletions docs/reference/customize.md
@@ -0,0 +1,58 @@
# Customizing the toolkit

OpenDR is fully open-source and, since the source code of all the developed tools is provided, it can be readily customized to meet the needs of many different application areas.
Several ready-to-use examples are provided and are expected to cover a wide range of needs.
For example, users can adapt the existing [ROS nodes](projects/opendr_ws), e.g., by adding the required triggers or by combining several nodes into one custom node that fits their needs.
Furthermore, several tools can be combined within a single ROS node, as showcased in the [face recognition ROS node](projects/opendr_ws/src/perception/scripts/face_recognition.py).
You can use these nodes as templates for customizing the toolkit to your own needs.
The rest of this document includes instructions for:
1. Building docker images using the provided docker files.


## Building custom docker images
The default docker images can be too large for some applications.
OpenDR provides its dockerfiles so that you can customize the images to your own needs, e.g., to use OpenDR inside custom third-party images.
You can therefore build the docker images locally using the [Dockerfile](/Dockerfile) ([Dockerfile-cuda](/Dockerfile-cuda) for CUDA) provided in the root folder of the toolkit.

### Building the CPU image
For the CPU image, execute the following commands:
```bash
git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr
cd opendr
sudo docker build -t opendr/opendr-toolkit:cpu .
```

### Building the CUDA image
For the CUDA-enabled image, first edit `/etc/docker/daemon.json` to set the default docker runtime:
```json
{
"runtimes": {
"nvidia": {
"path": "nvidia-container-runtime",
"runtimeArgs": []
}
},
"default-runtime": "nvidia"
}
```

Restart docker afterwards:
```bash
sudo systemctl restart docker.service
```
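To sanity-check that the runtime change took effect after the restart, you can inspect the daemon configuration (the exact output wording may vary across Docker versions):
```bash
# The default runtime should now be reported as "nvidia"
docker info | grep -i 'default runtime'
```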
Then you can build the supplied dockerfile:
```bash
git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr
cd opendr
sudo docker build -t opendr/opendr-toolkit:cuda -f Dockerfile-cuda .
```

### Running the custom images
In order to run them, the commands are respectively:
```bash
sudo docker run -p 8888:8888 opendr/opendr-toolkit:cpu
```
and
```bash
sudo docker run --gpus all -p 8888:8888 opendr/opendr-toolkit:cuda
```
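Depending on your workflow, you may also want to persist notebooks or data between container runs; a possible variation (the host path and mount point are illustrative) is:
```bash
# Mount a host directory into the container and expose Jupyter on localhost:8888
sudo docker run -p 8888:8888 -v "$(pwd)/workspace:/opendr/workspace" opendr/opendr-toolkit:cpu
```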
138 changes: 54 additions & 84 deletions docs/reference/installation.md
@@ -1,68 +1,29 @@
# Installing OpenDR toolkit

OpenDR can be installed in the following ways:
1. By cloning this repository (CPU/GPU support)
2. Using *pip* (CPU/GPU support)
3. Using *docker* (CPU/GPU support)
1. Using *pip* (CPU/GPU support)
2. Using *docker* (CPU/GPU support)
3. By cloning this repository (CPU/GPU support, for advanced users only)

The following table summarizes the installation options based on your system architecture and OS:

| Installation Method | CPU/GPU | OS |
|---------------------|----------|-----------------------|
| Clone & Install | Both | Ubuntu 20.04 (x86-64) |
| pip | Both | Ubuntu 20.04 (x86-64) |
| docker | Both | Linux / Windows |
| Installation Method | OS |
|-----------------------|-----------------------|
| Clone & Install | Ubuntu 20.04 (x86-64) |
| pip | Ubuntu 20.04 (x86-64) |
| docker | Linux / Windows |

Note that the pip installation includes only the Python API of the toolkit.
If you need all the functionalities of the toolkit (e.g., the ROS nodes), then you need to either use the pre-compiled docker images or follow the installation instructions for cloning and building the toolkit.
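For reference, a minimal pip-based setup could look like the sketch below, assuming the `opendr-toolkit` package name published on PyPI and a plain virtual environment (see the pip section further down for the authoritative steps):
```bash
# Create and activate a virtual environment, then install the Python API only
python3 -m venv venv
source venv/bin/activate
pip install opendr-toolkit
```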

# Installing by cloning OpenDR repository (Ubuntu 20.04, x86-64 architecture)

This is the recommended way of installing the whole toolkit, since it allows for fully exploiting all the provided functionalities.
To install the toolkit, please first make sure that you have `git` available on your system.
```bash
sudo apt install git
```
Then, clone the toolkit:
```bash
git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr
```
You are then ready to install the toolkit:
```bash
cd opendr
./bin/install.sh
```
The installation script automatically installs all the required dependencies.
Note that this might take a while (~10-20 min, depending on your machine and network connection) and that the script also makes system-wide changes.
Using dockerfiles is strongly advised (please see below), unless you know what you are doing.
Please also make sure that you have enough RAM available for the installation (about 4GB of free RAM is needed for the full installation/compilation).


If you want to install GPU-related dependencies, then you can appropriately set the `OPENDR_DEVICE` variable.
The toolkit defaults to using CPU.
Therefore, if you want to use GPU, please set this variable accordingly *before* running the installation script:
```bash
export OPENDR_DEVICE=gpu
```
The installation script creates a *virtualenv*, where the toolkit is installed.
To activate the OpenDR environment, you can just source `activate.sh`:
The toolkit is developed and tested on *Ubuntu 20.04 (x86-64)*.
Please make sure that you have the most recent version of all tools by running
```bash
source ./bin/activate.sh
sudo apt upgrade
```
Then, you are ready to use the toolkit!

**NOTE:** `OPENDR_DEVICE` does not alter the inference/training device at *runtime*.
It only affects the dependency installation.
You can use OpenDR API to change the inference device.

You can also verify the installation by using the supplied Python and C unit tests:
```bash
make unittest
make ctests
```

If you plan to use GPU-enabled functionalities, then you are advised to install [CUDA 11.2](https://developer.nvidia.com/cuda-11.2.0-download-archive), along with [CuDNN](https://developer.nvidia.com/cudnn).

**HINT:** All tests probe for the `TEST_DEVICE` environment variable when running.
If this environment variable is set during testing, it allows for easily running all tests on a different device (e.g., setting `TEST_DEVICE=cuda:0` runs all tests on the first GPU of the system).
before installing the toolkit and then follow the installation instructions in the relevant section.
All the required dependencies will be automatically installed (or explicit instructions are provided).
Other platforms apart from Ubuntu 20.04 (e.g., Windows or other Linux distributions) are currently supported through docker images.

# Installing using *pip*

@@ -175,45 +136,54 @@ In this case, do not forget to enable the virtual environment with:
```bash
source bin/activate.sh
```
## Build the docker images yourself _(optional)_
Alternatively you can also build the docker images locally using the [Dockerfile](/Dockerfile) ([Dockerfile-cuda](/Dockerfile-cuda) for cuda) provided in the root folder of the toolkit.

For the CPU image, execute the following commands:
# Installing by cloning OpenDR repository (Ubuntu 20.04, x86-64 architecture)

This is the recommended way of installing the whole toolkit, since it allows for fully exploiting all the provided functionalities.
To install the toolkit, please first make sure that you have `git` available on your system.
```bash
sudo apt install git
```
Then, clone the toolkit:
```bash
git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr
```
You are then ready to install the toolkit:
```bash
cd opendr
sudo docker build -t opendr/opendr-toolkit:cpu .
./bin/install.sh
```
The installation script automatically installs all the required dependencies.
Note that this might take a while (~10-20 min, depending on your machine and network connection) and that the script also makes system-wide changes.
Using dockerfiles is strongly advised (please see below), unless you know what you are doing.
Please also make sure that you have enough RAM available for the installation (about 4GB of free RAM is needed for the full installation/compilation).

For the cuda-enabled image, first edit `/etc/docker/daemon.json` in order to set the default docker runtime:
```
{
"runtimes": {
"nvidia": {
"path": "nvidia-container-runtime",
"runtimeArgs": []
}
},
"default-runtime": "nvidia"
}
```

Restart docker afterwards:
```
sudo systemctl restart docker.service
If you want to install GPU-related dependencies, then you can appropriately set the `OPENDR_DEVICE` variable.
The toolkit defaults to using CPU.
Therefore, if you want to use GPU, please set this variable accordingly *before* running the installation script:
```bash
export OPENDR_DEVICE=gpu
```
Then you can build the supplied dockerfile:
The installation script creates a *virtualenv*, where the toolkit is installed.
To activate the OpenDR environment, you can just source `activate.sh`:
```bash
git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr
cd opendr
sudo docker build -t opendr/opendr-toolkit:cuda -f Dockerfile-cuda .
source ./bin/activate.sh
```
Then, you are ready to use the toolkit!

**NOTE:** `OPENDR_DEVICE` does not alter the inference/training device at *runtime*.
It only affects the dependency installation.
You can use OpenDR API to change the inference device.

In order to run them, the commands are respectively:
You can also verify the installation by using the supplied Python and C unit tests:
```bash
sudo docker run --gpus all -p 8888:8888 opendr/opendr-toolkit:cpu
```
and
```
sudo docker run --gpus all -p 8888:8888 opendr/opendr-toolkit:cuda
make unittest
make ctests
```

If you plan to use GPU-enabled functionalities, then you are advised to install [CUDA 11.2](https://developer.nvidia.com/cuda-11.2.0-download-archive), along with [CuDNN](https://developer.nvidia.com/cudnn).

**HINT:** All tests probe for the `TEST_DEVICE` environment variable when running.
If this environment variable is set during testing, it allows for easily running all tests on a different device (e.g., setting `TEST_DEVICE=cuda:0` runs all tests on the first GPU of the system).
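For example, a GPU test run could be started as follows (assuming a CUDA-capable device is available):
```bash
# Run the Python unit tests on the first GPU instead of the CPU
export TEST_DEVICE=cuda:0
make unittest
```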
