Log, organize and optimize Deep Learning experiments
Test tube is a Python library to track and optimize Deep Learning experiments. It's framework-agnostic and built on top of the Python argparse API for ease of use.
Test tube stores logs as CSV files on your machine for easy analysis.
```bash
pip install test_tube
```
Use Test Tube if you need to:
- Track multiple Experiments across models.
- Optimize your hyperparameters using grid_search or random_search.
- Visualize experiments without uploading anywhere; logs are stored as CSV files.
- Automatically track ALL parameters for a particular training run.
- Automatically snapshot your code for an experiment using git tags.
- Save progress images inline with training metrics.
Compatible with:
- Python 2, 3
- Tensorflow
- Keras
- Pytorch
- Caffe, Caffe2
- Chainer
- MXNet
- Theano
- Scikit-learn
- Any Python-based ML or DL library
- Runs seamlessly on CPU and GPU.
If you're a researcher, you're highly encouraged to use test-tube to publish your paper's training logs: it adds transparency and shows others what you tried that didn't work.
```python
from test_tube import Experiment

exp = Experiment(name='dense_model', save_dir='../some/dir/')
exp.add_meta_tags({'learning_rate': 0.002, 'nb_layers': 2})

for step in range(1, 10):
    tng_err = 1.0 / step
    exp.add_metric_row({'tng_err': tng_err})
```
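The feature list above mentions snapshotting your code with git tags. A minimal sketch of turning that on, assuming the `Experiment` constructor exposes a `create_git_tag` flag (check the docs for your version):

```python
from test_tube import Experiment

# assumption: create_git_tag=True tags the current commit so this run's exact code can be recovered
exp = Experiment(name='dense_model', save_dir='../some/dir/', create_git_tag=True)
```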
```python
import pandas as pd
import matplotlib.pyplot as plt

# each experiment is saved to a metrics.csv file which can be imported anywhere
# images save to exp/version/images
df = pd.read_csv('../some/dir/test_tube_data/dense_model/version_0/metrics.csv')
df.tng_err.plot()
plt.show()
```
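Because every run lands in its own `version_<n>` folder, overlaying runs takes nothing more than pandas and glob. A hypothetical sketch built on the directory layout shown above:

```python
import glob

import pandas as pd
import matplotlib.pyplot as plt

# each version_<n> directory holds one run's metrics.csv
for path in sorted(glob.glob('../some/dir/test_tube_data/dense_model/version_*/metrics.csv')):
    df = pd.read_csv(path)
    df.tng_err.plot(label=path.split('/')[-2])

plt.legend()
plt.show()
```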
```python
from test_tube import HyperOptArgumentParser

# subclass of argparse
parser = HyperOptArgumentParser(strategy='random_search')
parser.add_argument('--learning_rate', default=0.002, type=float, help='the learning rate')

# let's enable optimizing over the number of layers in the network
parser.add_opt_argument_list('--nb_layers', default=2, type=int, tunnable=True, options=[2, 4, 8])

# and tune the number of units in each layer
parser.add_opt_argument_range('--neurons', default=50, type=int, tunnable=True, start=100, end=800, nb_samples=10)

# compile (because it's argparse underneath)
hparams = parser.parse_args()

# optimize across 4 gpus
# run GPUs 2 and 3 together, GPUs 0 and 1 each on their own
hparams.optimize_parallel_gpu_cuda(MyModel.fit, gpu_ids=['1', '2,3', '0'], nb_trials=192, nb_workers=4)
```
Or... across CPUs:

```python
hparams.optimize_parallel_cpu(MyModel.fit, nb_trials=192, nb_workers=12)
```
```python
from test_tube import HyperOptArgumentParser

# subclass of argparse
parser = HyperOptArgumentParser(strategy='random_search')
parser.add_argument('--learning_rate', default=0.002, type=float, help='the learning rate')

# let's enable optimizing over the number of layers in the network
parser.add_opt_argument_list('--nb_layers', default=2, type=int, tunnable=True, options=[2, 4, 8])

# and tune the number of units in each layer
parser.add_opt_argument_range('--neurons', default=50, type=int, tunnable=True, start=100, end=800, nb_samples=10)

# compile (because it's argparse underneath)
hparams = parser.parse_args()

# run 20 trials of random search over the hyperparams
for hparam_trial in hparams.trials(20):
    train_network(hparam_trial)
```
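Each `hparam_trial` is a plain argparse-style namespace, so the training function simply reads attributes. A hypothetical `train_network` (the model code itself is a placeholder):

```python
def train_network(hparam_trial):
    # attributes come straight from the parser definitions above
    lr = hparam_trial.learning_rate
    nb_layers = hparam_trial.nb_layers
    neurons = hparam_trial.neurons
    print('training with lr={}, layers={}, neurons={}'.format(lr, nb_layers, neurons))
    # ... build and fit your model here ...
```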
```python
import argparse
from test_tube import HyperOptArgumentParser

# these lines are equivalent
parser = argparse.ArgumentParser(description='Process some integers.')
parser = HyperOptArgumentParser(description='Process some integers.', strategy='grid_search')

# do normal argparse stuff
...
```
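Because it subclasses `ArgumentParser`, any standard argparse call works unchanged. A quick sketch (the `--batch_size` flag is only an example):

```python
from test_tube import HyperOptArgumentParser

parser = HyperOptArgumentParser(strategy='grid_search')
# plain argparse calls work on a HyperOptArgumentParser too
parser.add_argument('--batch_size', default=32, type=int)
hparams = parser.parse_args()
print(hparams.batch_size)
```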
```python
import matplotlib.pyplot as plt

# name must have either jpg, png or jpeg in it
img = plt.imread('a.jpg')
exp.add_metric_row({'test_jpg': img, 'val_err': 0.2})

# saves image to ../exp/version/media/test_0.jpg
# csv has file path to that image in that cell
```
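To get a logged image back, read the file path test tube wrote into that CSV cell. A hypothetical sketch, assuming the column is named after the metric key (`test_jpg`) as above:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('../some/dir/test_tube_data/dense_model/version_0/metrics.csv')
# the cell stores a path like ../exp/version/media/test_0.jpg
img = plt.imread(df.test_jpg.dropna().iloc[0])
plt.imshow(img)
plt.show()
```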
Feel free to fix bugs and make improvements!
- Check out the current bugs and feature requests here.
- To work on a bug or feature, head over to our project page and assign yourself the bug.
- We'll add contributor names periodically as people improve the library!
To cite the framework use:
```bibtex
@misc{Falcon2017,
  author = {Falcon, W.A.},
  title = {Test Tube},
  year = {2017},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/williamfalcon/test-tube}}
}
```