JupyterLab for AI in Docker, with conda installed.
By default, the JupyterLab server runs in an Anaconda environment with PyTorch and other commonly used libraries installed.
This Docker configuration is based on Ubuntu 22.04 LTS, with CUDA 12.4 and cuDNN 9. You may change the base system and the CUDA version to any of those listed here: nvidia/cuda | DockerHub.
The CUDA Docker environment is supported by Ubuntu and the NVIDIA CUDA toolkit; for setup instructions, see: CUDA and cuDNN Install | Pop!_OS. It should also work on Windows via WSL.
- `latest`: Most recent build, directly from the `main` branch.
- `v2.x.x`: JupyterLab installed with PyTorch GPU version `2.x.x`.
- Branch names: Snapshots of the project environment; refer to the branch README for more information.

The full list of tags is available on muhac/jupyter-pytorch | DockerHub.
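Pulling a tagged build works as usual; the tag below is one example, and the full set of valid tags is on the DockerHub page:

```shell
# Pull the most recent build from the main branch.
# Substitute a v2.x.x tag to pin a specific PyTorch GPU version.
docker pull muhac/jupyter-pytorch:latest
```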
The image automatically runs a JupyterLab server on port `80`. The working directory in the container is `/root/projects`.
```shell
PROJECT_DIR=./
SERVER_PORT=80

docker run --detach \
    --name jupyter --restart unless-stopped \
    --ipc=host --runtime=nvidia --gpus all \
    -p $SERVER_PORT:80 \
    -v $PROJECT_DIR:/root/projects \
    muhac/jupyter-pytorch:latest
```
You can use this notebook to check your PyTorch GPU environment.
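If you prefer to check from a notebook cell directly, a minimal sketch like the following works; the `gpu_report` helper is illustrative (not part of the image) and degrades gracefully when PyTorch is missing:

```python
import importlib.util

def gpu_report() -> dict:
    """Summarize the PyTorch/CUDA setup of the current kernel."""
    if importlib.util.find_spec("torch") is None:
        # PyTorch is not installed in this environment at all.
        return {"torch_installed": False}
    import torch
    report = {
        "torch_installed": True,
        "torch_version": torch.__version__,
        "cuda_available": torch.cuda.is_available(),
    }
    if report["cuda_available"]:
        # Name of the first visible GPU, e.g. from --gpus all.
        report["device_name"] = torch.cuda.get_device_name(0)
    return report

print(gpu_report())
```

Inside the container, `cuda_available` should be `True`; if it is `False`, check that the container was started with `--runtime=nvidia --gpus all`.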
It is also possible to create your own conda environment and edit `/root/.bashrc` to activate a different one when starting JupyterLab. If you do this, make sure all related files stay synced to the host system so they are not lost after pulling a new image.
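One way to keep such an environment safe is to store it under the mounted project directory. The commands below are a sketch run inside the container; the environment path and Python version are assumptions, not defaults of the image:

```shell
# Create a conda environment under the host-mounted volume
# (/root/projects maps to PROJECT_DIR on the host), so it
# survives pulling a new image. Path and version are examples.
conda create --prefix /root/projects/envs/custom python=3.11 -y

# Activate it in future shells by appending to /root/.bashrc.
echo 'conda activate /root/projects/envs/custom' >> /root/.bashrc
```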