# tensorflow-compiler

Scripts to set up and compile TensorFlow from source.

## Motivation

Compiling TensorFlow from source can help accelerate training time. However, it's not an easy task. Users usually face four major problems:

- Packages are not available on their current version of Ubuntu
- Messy, inflated storage after installing build tools
- Official Docker devel images do not support every version of TensorFlow
- The compilation procedure is lengthy, and you have to do it once for every hardware configuration

## Usage

1. Install and set up the NVIDIA Container Toolkit.

If you intend to compile TensorFlow with NVIDIA GPU support, you will need the NVIDIA Container Toolkit to pass your GPUs to the Docker container.

If you encounter the error `cgroup subsystem devices not found: unknown`, refer to the workaround here.
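As a quick sanity check (not part of the original instructions), you can confirm that Docker can see your GPUs before building. The CUDA image tag below is only an example; use any CUDA base image compatible with your driver:

```sh
# Sanity check: the NVIDIA Container Toolkit should forward GPUs into containers.
# The image tag is illustrative -- pick one compatible with your installed driver.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```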

2. Generate the `.env` file for docker-compose:

```sh
sh generate_env.sh
```
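The contents of the generated `.env` depend entirely on `generate_env.sh`; the snippet below is a purely hypothetical illustration of the kind of `KEY=value` pairs docker-compose reads from a `.env` file, not the actual variables this repository uses:

```sh
# Hypothetical example only -- the real variable names come from generate_env.sh.
# docker-compose substitutes these into docker-compose.yml as ${TF_VERSION}, etc.
TF_VERSION=2.11.0
PYTHON_VERSION=3.10
CUDA_VERSION=11.8
```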
3. Build the image and launch docker-compose:

```sh
docker-compose build [--no-cache] tensorflow-compiler-<gpu|cpu>
docker-compose run -it --rm [--gpus all] \
    --device /dev/nvidia0 --device /dev/nvidia-modeset --device /dev/nvidia-uvm --device /dev/nvidia-uvm-tools --device /dev/nvidiactl \
    --network host \
    -v "$(realpath ./tensorflow):/tmp/tensorflow_pkg" \
    tensorflow-compiler-<gpu|cpu>

# When inside the docker container, run:
sh build.sh

# Or do whatever you want
```
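`build.sh` is provided by this repository and its exact contents are not reproduced here; as a rough reference only, a from-source TensorFlow build inside the container typically follows the standard upstream flow sketched below (the targets and paths are the upstream defaults, not necessarily what `build.sh` does):

```sh
# Rough sketch of the standard upstream TensorFlow build flow -- build.sh may differ.
./configure                                                    # answer the interactive prompts (CUDA, compute capabilities, ...)
bazel build //tensorflow/tools/pip_package:build_pip_package   # compile TensorFlow
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg   # write the .whl into the mounted directory
```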

A convenience script, `launch.sh`, is also available:

```sh
sh launch.sh build|start gpu|cpu
```
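For example, to build the GPU image and then start a container from it (assuming the `build|start` and `gpu|cpu` arguments map directly to the docker-compose build and run steps above):

```sh
# Build the GPU image, then start a container from it.
sh launch.sh build gpu
sh launch.sh start gpu
```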
4. Retrieve the compiled `.whl` file from the host's `tensorflow` directory:

```sh
# In a new terminal
cd tensorflow-compiler/
cp tensorflow/tensorflow_<tf_ver>.py<py_ver>.whl path/to/permanent/storage/
pip install path/to/permanent/storage/tensorflow_<tf_ver>.py<py_ver>.whl
```
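To confirm the freshly built wheel is installed and, for GPU builds, that TensorFlow can see your devices, a quick check such as the following can be used:

```sh
# Quick post-install check: print the TensorFlow version and any visible GPUs.
python -c "import tensorflow as tf; print(tf.__version__); print(tf.config.list_physical_devices('GPU'))"
```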

## Contribution