PyTorch template for a quick start

This template provides the basic structure of a PyTorch project, including checkpoint saving and logging.
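
As a rough sketch of what such checkpoint handling typically looks like (the helper names below are hypothetical, not this template's actual functions):

```python
import torch

def save_checkpoint(model, optimizer, epoch, path):
    # Bundle everything needed to resume training into a single file.
    torch.save({
        "epoch": epoch,
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
    }, path)

def load_checkpoint(model, optimizer, path):
    # Restore model and optimizer state; return the epoch to resume from.
    ckpt = torch.load(path, map_location="cpu")
    model.load_state_dict(ckpt["model_state_dict"])
    optimizer.load_state_dict(ckpt["optimizer_state_dict"])
    return ckpt["epoch"]
```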

Use the following command lines to launch run.py for training or inference with the deep learning network. Detailed explanations can be checked with --help.

There are three ways to run the project:

  1. Training without resuming from a checkpoint, loading your configuration from a .yaml file:
    python run.py <-c|--config_path> $YOUR_CONFIG_PATH [-d|--result_dir] $YOUR_SAVE_DIR
  2. Training while resuming from a checkpoint. If you don't provide $YOUR_SAVE_DIR, the saving directory is automatically set to $YOUR_CHECKPOINT_CONFIG_DIR (a sketch of this fallback appears after this list):
    python run.py <-c|--config_path> $YOUR_CHECKPOINT_CONFIG_PATH [-R|--resume] [-d|--result_dir] $YOUR_SAVE_DIR
  3. Inference. In this mode, the configuration is loaded from $YOUR_CHECKPOINT_CONFIG_DIR and -R|--resume is disabled:
    python run.py <-c|--config_path> $YOUR_CHECKPOINT_CONFIG_PATH [-I|--infer] [-d|--result_dir] $YOUR_SAVE_DIR
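
A minimal sketch of the config loading and the result-directory fallback described in mode 2; load_config is a hypothetical helper (the template's real logic may differ) and it assumes PyYAML is available:

```python
import os
import yaml  # PyYAML

def load_config(config_path, result_dir=None):
    # Parse the .yaml file passed via -c/--config_path into a dict.
    with open(config_path) as f:
        cfg = yaml.safe_load(f)
    # If -d/--result_dir was not given, fall back to the directory
    # that already holds the checkpoint's config file.
    if result_dir is None:
        result_dir = os.path.dirname(os.path.abspath(config_path))
    return cfg, result_dir
```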

Besides the basic deep learning training hyperparameters (learning rate, number of epochs, etc.), you can add parser arguments in utils/arguments.py according to your requirements.
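
For example, extra arguments could be registered next to the existing ones. The flag names below follow the commands above, but the parser layout, help strings, and the --weight_decay example are only assumptions:

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="PyTorch template runner")
    parser.add_argument("-c", "--config_path", type=str, required=True,
                        help="path to the .yaml configuration file")
    parser.add_argument("-d", "--result_dir", type=str, default=None,
                        help="directory for checkpoints, logs and results")
    parser.add_argument("-R", "--resume", action="store_true",
                        help="resume training from a checkpoint")
    parser.add_argument("-I", "--infer", action="store_true",
                        help="run inference instead of training")
    # A project-specific hyperparameter you might add yourself.
    parser.add_argument("--weight_decay", type=float, default=0.0,
                        help="L2 regularization strength")
    return parser
```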

This template has been applied to the MNIST dataset for digit recognition and ran smoothly. Feel free to use it (after you download the dataset; see the snippet below)!
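
One way to fetch MNIST, assuming torchvision is installed; the ./data location is only an example:

```python
from torchvision import datasets, transforms

# Downloads MNIST into ./data on first run.
transform = transforms.ToTensor()
train_set = datasets.MNIST(root="./data", train=True, download=True, transform=transform)
test_set = datasets.MNIST(root="./data", train=False, download=True, transform=transform)
print(len(train_set), len(test_set))  # 60000 10000
```

The commands below give an example train/infer invocation.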

```bash
# train
python run.py -c configs/configs.yaml -d checkpoints/1/
# infer
python run.py -c checkpoints/1/configs.yaml -I -d results/1/
```