This template provides the basic structure of a PyTorch project, including checkpoint saving and logging. Use the following command lines to run `run.py` for training or inference with your deep learning network. Detailed explanations are available via `--help`.
There are three ways to run the project:
- Training without resuming from a checkpoint, loading your configuration from a `.yaml` file:
  ```shell
  python run.py <-c|--config_path> $YOUR_CONFIG_PATH [-d|--result_dir] $YOUR_SAVE_DIR
  ```
- Training resumed from a checkpoint. If you don't provide `$YOUR_SAVE_DIR`, the saving directory is automatically set to `$YOUR_CHECKPOINT_CONFIG_DIR`:
  ```shell
  python run.py <-c|--config_path> $YOUR_CHECKPOINT_CONFIG_PATH [-R|--resume] [-d|--result_dir] $YOUR_SAVE_DIR
  ```
- Inference. In this mode, the configuration is loaded from `$YOUR_CHECKPOINT_CONFIG_DIR` and `-R|--resume` is disabled:
  ```shell
  python run.py <-c|--config_path> $YOUR_CHECKPOINT_CONFIG_PATH [-I|--infer] [-d|--result_dir] $YOUR_SAVE_DIR
  ```
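The flag behavior described above (inference disabling `--resume`, and the save directory defaulting to the config's directory) can be sketched with `argparse`. This is a minimal illustration, not the template's actual `utils/arguments.py`; the helper name `parse_args` and the defaulting logic are assumptions based on the usage above.

```python
import argparse
import os

def parse_args(argv=None):
    # Flags mirror the command lines above; the real utils/arguments.py
    # may define them differently.
    parser = argparse.ArgumentParser(description="Train or infer with run.py")
    parser.add_argument("-c", "--config_path", required=True,
                        help="path to a .yaml config (or a checkpoint's config)")
    parser.add_argument("-R", "--resume", action="store_true",
                        help="resume training from a checkpoint")
    parser.add_argument("-I", "--infer", action="store_true",
                        help="run inference; disables --resume")
    parser.add_argument("-d", "--result_dir", default=None,
                        help="directory for checkpoints and results")
    args = parser.parse_args(argv)

    # Inference mode disables resuming, as described above.
    if args.infer:
        args.resume = False
    # If no result_dir is given, fall back to the config file's directory.
    if args.result_dir is None:
        args.result_dir = os.path.dirname(args.config_path)
    return args
```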
Besides the basic deep learning hyperparameters (learning rate, number of epochs, etc.), you can add parser arguments in `utils/arguments.py` according to your requirements.
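As a rough illustration of what such a `.yaml` configuration might contain, here is a hypothetical `configs/configs.yaml`; the field names are assumptions and should be adapted to whatever keys your `run.py` actually reads.

```yaml
# Hypothetical example config; adjust keys to match your own code.
model:
  name: simple_cnn
  num_classes: 10
train:
  learning_rate: 0.001
  epochs: 20
  batch_size: 64
checkpoint:
  save_every: 5   # epochs between checkpoint saves
```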
This template has been tested on the MNIST dataset for digit recognition and ran smoothly. Feel free to use it (after downloading the dataset)!
```shell
# train
python run.py -c configs/configs.yaml -d checkpoints/1/
# infer
python run.py -c checkpoints/1/configs.yaml -I -d results/1/
```