# Simple_VPR_codebase

This repository serves as a starting point for implementing a Visual Place Recognition (VPR) pipeline. It allows you to train a simple ResNet-18 on the GSV dataset, and it relies on the `pytorch_metric_learning` library.

## Download datasets

NB: if you are using Colab, skip this section

The script below downloads GSV_xs, SF_xs, and tokyo_xs, which are reduced versions of the GSVCities, SF-XL, and Tokyo247 datasets, respectively:

```
python download_datasets.py
```
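After the download completes, you should end up with a directory layout roughly like the one below. This is a sketch: the dataset names come from the script above, while the `data/` root and the `test` subfolders are assumptions based on the paths used in the commands later in this README.

```
data/
├── gsv_xs/
├── sf_xs/
│   └── test/
└── tokyo_xs/
    └── test/
```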

## Install dependencies

NB: if you are using Colab, skip this section

You can install the required packages by running:

```
pip install -r requirements.txt
```
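If you want to keep these packages isolated from your system Python, one common approach (a suggestion, not a requirement of this codebase) is to install them inside a virtual environment:

```
# Optional: create and activate a virtual environment, then install.
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```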

## Run an experiment

You can choose to validate/test on sf_xs or tokyo_xs.

```
python main.py --train_path /path/to/datasets/gsv_xs --val_path /path/to/datasets/tokyo_xs/test --test_path /path/to/datasets/tokyo_xs/test --exp_name expname
```
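To validate and test on sf_xs instead, swap the paths accordingly. A sketch, assuming sf_xs follows the same layout as tokyo_xs (the `sf_xs/test` subfolder is an assumption; adjust it to your actual dataset layout):

```
# Validate/test on sf_xs instead of tokyo_xs.
# NOTE: sf_xs/test is assumed by analogy with tokyo_xs/test.
python main.py --train_path /path/to/datasets/gsv_xs \
               --val_path /path/to/datasets/sf_xs/test \
               --test_path /path/to/datasets/sf_xs/test \
               --exp_name expname_sf
```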

## Resuming from checkpoint

The code will save the best model (according to the validation score) and the last model. If your experiment dies and you want to resume from where you left off, simply rerun your experiment passing the argument `--checkpoint model_path`. You can find the model checkpoints under `logs/lightning_logs/exp_name/checkpoints`.
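For example (a sketch: `last.ckpt` is a hypothetical filename; use an actual file from your checkpoints directory):

```
# Resume training from a previously saved checkpoint.
# NOTE: last.ckpt is a placeholder; pick a real file from
# logs/lightning_logs/expname/checkpoints.
python main.py --train_path /path/to/datasets/gsv_xs \
               --val_path /path/to/datasets/tokyo_xs/test \
               --test_path /path/to/datasets/tokyo_xs/test \
               --exp_name expname \
               --checkpoint logs/lightning_logs/expname/checkpoints/last.ckpt
```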

## Logging

The code will log everything under the directory `logs/lightning_logs/exp_name`. You will find the models under `checkpoints`.

All the textual output generated by the code is saved in two files, `logs/lightning_logs/exp_name/debug.log` and `logs/lightning_logs/exp_name/info.log`: `info.log` contains only the relevant information, whereas `debug.log` is a superset of it and contains additional (typically less useful) prints. If you want to add any prints to the code, you can do so by using the functions `logging.debug` or `logging.info`.

Finally, in this directory you will also find some binary files generated by tensorboard. Once you install tensorboard via pip (check its documentation on how to do it), you can download the `logs` directory to your local machine and inspect the logs using `tensorboard --logdir logs/lightning_logs`. This launches a web server on `localhost:6006`, which you can open in your browser.
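A minimal sketch of that tensorboard workflow, assuming the `logs` directory has already been copied to your local machine:

```
# Install tensorboard, then point it at the lightning logs directory.
pip install tensorboard
tensorboard --logdir logs/lightning_logs
# Then open http://localhost:6006 in your browser.
```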

## Running evaluations

Once you have trained your models, you can run an inference-only step using the `eval.py` script, passing the `--checkpoint` argument to specify the model checkpoint to load:

```
python eval.py --checkpoint logs/lightning_logs/exp_name/checkpoints/_epoch(01)_R@1[30.4762]_R@5[49.2063].ckpt --train_path data/gsv_xs --val_path data/tokyo_xs/test --test_path data/tokyo_xs/test --exp_name test_model
```

## Usage on Colab

We provide the notebook `colab_example.ipynb`. It shows you how to mount your GDrive file system in Colab, unzip the datasets, install packages, and run your first experiment.

NB: BEFORE running this notebook, you must copy the datasets zip into your GDrive. You can use the link that we provided and simply click 'create a copy'. Make sure that you have enough space (roughly 8 GB).

NB^2: you can ignore the dataset `robotcar_one_every_2m`.