
DVC Get Started: Experiments

This is an auto-generated repository for use in the DVC Get Started: Experiments guide.

This is a Computer Vision (CV) project that solves the problem of segmenting out swimming pools from satellite images.

Example results

We use a slightly modified version of the BH-Pools dataset: we split the original 4K-resolution images into 1024x1024-pixel tiles.

🐛 Please report any issues found in this project to example-repos-dev.

Installation

Python 3.8+ is required to run code from this repo.

$ git clone https://github.com/iterative/example-get-started-experiments
$ cd example-get-started-experiments

Before installing the requirements, we strongly recommend creating a virtual environment, for example with the built-in venv module:

$ python -m venv .venv
$ source .venv/bin/activate
$ pip install -r requirements.txt

This DVC project comes with a preconfigured DVC remote storage that holds the raw input data as well as the intermediate and final results the pipeline produces. It is a read-only HTTP remote.

$ dvc remote list
storage  https://remote.dvc.org/get-started-pools

You can run dvc pull to download the data:

$ dvc pull
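
At any point you can check whether the workspace is in sync with the pipeline and the pulled data:

$ dvc status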

Running in your environment

Run dvc exp run to reproduce the pipeline:

$ dvc exp run
Data and pipelines are up to date.
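
You can also run an experiment with modified hyperparameters using the --set-param (-S) flag, then compare results. The parameter name below is only illustrative; see params.yaml for the ones this project actually defines:

$ dvc exp run --set-param train.epochs=10
$ dvc exp show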

If you'd like to test commands like dvc push that require write access to the remote storage, the easiest way is to set up a "local remote" on your file system:

This kind of remote is located in the local file system, but is external to the DVC project.

$ mkdir -p /tmp/dvc-storage
$ dvc remote add local /tmp/dvc-storage

You should now be able to run:

$ dvc push -r local
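
If you'd rather not pass -r every time, you can make it the default remote:

$ dvc remote default local
$ dvc push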

Existing stages

There are a couple of Git tags in this project:

The first contains an end-to-end Jupyter notebook that loads data, trains a model, and reports model performance. DVCLive is used for experiment tracking. See this blog post for more details.

The second contains a DVC pipeline dvc.yaml that was created by refactoring the above notebook into individual pipeline stages.

The pipeline artifacts (processed data, model file, etc.) are automatically versioned.

This tag also contains a GitHub Actions workflow that reruns the pipeline if any changes are introduced to the pipeline-related files. CML is used in this workflow to provision a cloud-based GPU machine as well as to report model performance results in Pull Requests.
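
Once the pipeline version is checked out, you can visualize how the stages defined in dvc.yaml connect (the command prints an ASCII rendering of the stage graph):

$ dvc dag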

Model Deployment

Check out the GitHub Workflow that uses the Iterative Studio Model Registry to deploy the model to AWS SageMaker whenever a new version is registered.

Project structure

The data files, DVC files, and results change as stages are created one by one. After cloning and using dvc pull to download data, models, and plots tracked by DVC, the workspace should look like this:

$ tree -L 2
.
├── LICENSE
├── README.md
├── data             # <-- Directory with raw and intermediate data
│   ├── pool_data    # <-- Raw image data
│   ├── pool_data.dvc # <-- .dvc file - a placeholder/pointer to raw data
│   ├── test_data    # <-- Processed test data
│   └── train_data   # <-- Processed train data
├── dvc.lock
├── dvc.yaml         # <-- DVC pipeline file
├── models
│   └── model.pkl    # <-- Trained model file
├── notebooks
│   └── TrainSegModel.ipynb # <-- Initial notebook (refactored into `dvc.yaml`) 
├── params.yaml      # <-- Parameters file
├── requirements.txt # <-- Python dependencies needed in the project
├── results          # <-- DVCLive reports and plots
│   ├── evaluate
│   └── train
└── src              # <-- Source code to run the pipeline stages
    ├── data_split.py
    ├── evaluate.py
    └── train.py
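
With the workspace populated, you can also inspect what the pipeline tracks. The exact metric and plot names depend on dvc.yaml and the DVCLive outputs under results/, but the commands themselves are standard DVC:

$ dvc metrics show
$ dvc plots show   # renders plots to dvc_plots/index.html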