Forked from just-eoghan/journeycv.

journeycv

Python · PyTorch Lightning · Config: Hydra · Code style: black

A data provenance framework for building better deep learning computer vision projects



Click on Use this template to initialize a new repository.


Check out the sample wandb dashboard for an example run here: https://wandb.ai/6097105/example_run

Introduction

In computer vision, it is fair to say that the journey (the experimental process) is just as important as the destination (the trained model). Several input and output components must be tracked to ensure provenance, such as hyperparameters, input data, and output test metrics. While many deep learning practitioners version control their model architecture with a familiar text-based tool such as Git or SVN, tracking the hyperparameters, dataset, generated model weights, test metrics, and so on cannot always be achieved easily with the same methodologies.


Getting Started

Follow these steps to set up the repository:

```bash
# clone the project
git clone https://github.com/deepseek-eoghan/journeycv
cd journeycv

# create and activate a conda environment
conda create -n journeycv python=3.8
conda activate journeycv

# install dependencies
conda install -c anaconda cython
pip install -r requirements.txt

# pycocotools cannot go in requirements.txt because it has
# pre-install dependencies (cython), so install it separately
pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI
```

Framework structure

├── configs                 <- Hydra configuration files
│   ├── callbacks               <- Callbacks configs
│   ├── datamodule              <- Datamodule configs
│   ├── experiment              <- Experiment configs
│   ├── local                   <- Local configs
│   ├── logger                  <- Logger configs
│   ├── mode                    <- Running mode configs
│   ├── model                   <- Model configs
│   ├── trainer                 <- Trainer configs
│   │
│   └── config.yaml             <- Main project configuration file
│
├── data                    <- Project data
│
├── logs                    <- Logs generated by Hydra and PyTorch 
│
├── src                     <- Project source code
│   ├── callbacks               <- Lightning callbacks
│   ├── datamodules             <- Lightning datamodules
│   ├── models                  <- Lightning models
│   ├── utils                   <- Utility scripts
│   └── train.py                <- Training pipeline
│
├── run.py                  <- Run pipeline with chosen configuration
│
├── .env.example            <- For storing private environment variables
├── .gitignore              <- List of files/folders ignored by git
├── .pre-commit-config.yaml <- Configuration of automatic code formatting
├── setup.cfg               <- Configurations of linters and pytest
├── requirements.txt        <- File for installing python dependencies
└── README.md

How it works / Extending for your project

An end-to-end, high-level run-through of executing experiments with our framework demonstrates how this workflow achieves provenance for computer vision experiments. Since this is a high-level overview, the inner workings of the blocks are described in the Framework structure section. The process starts by writing the source code, which is broken into three parts. First, a datamodule is written to download, transform, and pass data to the model. Second, a Lightning module, which is essentially the model, is written. Finally, callbacks, which encompass any ancillary code such as logging or notifications, are written. Datamodules and models are passed a hyperparameter object that encompasses all the configuration data.
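The three-part split can be sketched with plain-Python stand-ins for the real PyTorch Lightning base classes (the names `HParams`, `DummyDataModule`, `DummyModel`, and `PrintCallback` are illustrative, not part of this framework):

```python
from dataclasses import dataclass

# Stand-in for the hyperparameter object passed to datamodules and models
# (in the real framework this is a Hydra/OmegaConf config); fields are illustrative.
@dataclass
class HParams:
    batch_size: int = 32
    lr: float = 1e-3

class DummyDataModule:
    """Stand-in for a pl.LightningDataModule: download, transform, serve data."""
    def __init__(self, hparams: HParams):
        self.hparams = hparams

    def train_dataloader(self):
        # Yield fixed-size "batches"; a real datamodule returns a DataLoader.
        data = list(range(8))
        bs = self.hparams.batch_size
        return [data[i:i + bs] for i in range(0, len(data), bs)]

class DummyModel:
    """Stand-in for a pl.LightningModule: the model plus its training logic."""
    def __init__(self, hparams: HParams):
        self.hparams = hparams

    def training_step(self, batch):
        return sum(batch)  # placeholder "loss"

class PrintCallback:
    """Stand-in for a Lightning callback: ancillary code such as logging."""
    def on_batch_end(self, loss):
        print(f"loss={loss}")

# Wire the three parts together the way a Trainer would.
hp = HParams(batch_size=4)
dm, model, cb = DummyDataModule(hp), DummyModel(hp), PrintCallback()
for batch in dm.train_dataloader():
    cb.on_batch_end(model.training_step(batch))
```

The key design point is that the hyperparameter object is the single source of configuration for both the datamodule and the model, which is what makes the configuration fully recordable.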

Configuration files are created with key-value pairs to pass to the source code. Base configurations are created first. Then specialized configuration files named "experiments" are written, which override the base configurations to tailor a run to a specific scheme.
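A hypothetical experiment file might look as follows; this is the usual Hydra experiment-override pattern, and the file name, keys, and values below are illustrative rather than taken from the repository:

```yaml
# configs/experiment/your_experiment.yaml (illustrative)
# @package _global_
defaults:
  - override /datamodule: your_datamodule
  - override /model: your_model

trainer:
  max_epochs: 20

model:
  lr: 0.001
```

Everything not overridden here is inherited from the base configurations, so an experiment file only records what makes the run special.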


At this point the framework is ready to run, and a job can be started by simply calling the following command:

```bash
python run.py experiment=your_experiment
```

Once the run has started, the cloud-based logging included in the framework writes all configuration parameters to an immutable run file, and metrics such as test-set loss and validation accuracy are updated in the cloud on each epoch. When the run finishes and the final logs have been compiled, there exists a record of the run containing all of its provenance information. This run history can be used to prove results or to reproduce them easily.
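The shape of such a run record can be sketched with the standard library (the file name and fields below are illustrative; the framework itself delegates this to its cloud logger, e.g. wandb):

```python
import json
from pathlib import Path

def write_run_record(path, config, metrics_per_epoch):
    """Write one JSON record of a run: the full config plus per-epoch metrics."""
    record = {"config": config, "metrics": metrics_per_epoch}
    Path(path).write_text(json.dumps(record, indent=2, sort_keys=True))

def load_run_record(path):
    """Reload a record, e.g. to verify results or reproduce a past run."""
    return json.loads(Path(path).read_text())

# Illustrative config and metrics for a finished run.
config = {"experiment": "your_experiment", "model": {"lr": 0.001}}
metrics = [{"epoch": 0, "val_acc": 0.71}, {"epoch": 1, "val_acc": 0.78}]
write_run_record("run_record.json", config, metrics)

record = load_run_record("run_record.json")
print(record["config"]["model"]["lr"])  # the exact lr this run used
```

Because the record pairs the complete configuration with the logged metrics, rerunning with the stored config is enough to attempt a reproduction.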

Credits

This project was generated with Template
