- Navigate into this repository
- Execute the following command:
  `conda env create -f environment.yml`
- Activate the environment:
  `conda activate landmark-recognition`
- Download one or more datasets from the Datasets list below
- It does not matter where you save them
- Do not rename the downloaded CSV files. They should be named `train` or `test`.
- Navigate into this repository
- Activate the environment
- Execute the following command:
  `python scripts/download_dataset.py --name={FOLDER_NAME} --csv={PATH_TO_DOWNLOADED_CSV}`
- You have to download the train and test images separately
- Images are saved to `./data/{FOLDER_NAME}/{CSV_NAME}/{ID}.jpg`
- For testing purposes you can also download only the first N images by passing `--num {N}`; see the sketch after this list
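
The repository's actual `scripts/download_dataset.py` is not shown here; the following is a minimal sketch of what such a script might look like, assuming the CSV has `id` and `url` columns (as in the Kaggle Google Landmarks CSVs) and writing to the layout described above:

```python
# Hypothetical sketch of scripts/download_dataset.py; the real script may differ.
# Assumes the CSV has "id" and "url" columns.
import argparse
import csv
import os
import urllib.request

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--name", required=True, help="dataset folder name")
    parser.add_argument("--csv", required=True, help="path to the downloaded CSV")
    parser.add_argument("--num", type=int, default=None,
                        help="download only the first N images (for testing)")
    args = parser.parse_args()

    # e.g. ./data/kaggle/train/ for --name=kaggle --csv=.../train.csv
    csv_name = os.path.splitext(os.path.basename(args.csv))[0]
    out_dir = os.path.join("data", args.name, csv_name)
    os.makedirs(out_dir, exist_ok=True)

    with open(args.csv, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            if args.num is not None and i >= args.num:
                break
            urllib.request.urlretrieve(row["url"],
                                       os.path.join(out_dir, row["id"] + ".jpg"))

if __name__ == "__main__":
    main()
```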
- `./data/` contains the datasets.
- `./evaluation/` contains the evaluation artifacts, such as evaluated metrics.
- `./tensorboard/` contains the tensorboard logs.
- `./log/` contains logs, such as logged stdout.
- `./experiments/` contains the experiment scripts.
- Experiment scripts must be named after the following structure: `exp_{ID}_{NAME}.py`
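
How an experiment is located by its `{ID}` is not documented here; a hypothetical sketch, assuming `./experiments/` is importable as a package and the `exp_{ID}_{NAME}.py` convention above holds:

```python
# Hypothetical helper: resolve an experiment module from its numeric ID,
# based on the exp_{ID}_{NAME}.py naming convention described above.
import glob
import importlib
import os

def load_experiment(exp_id: int):
    pattern = os.path.join("experiments", f"exp_{exp_id}_*.py")
    matches = glob.glob(pattern)
    if len(matches) != 1:
        raise FileNotFoundError(f"expected exactly one match for {pattern}, got {matches}")
    module_name = os.path.splitext(os.path.basename(matches[0]))[0]
    # Assumes experiments/ contains an __init__.py so it imports as a package.
    return importlib.import_module(f"experiments.{module_name}")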
- Dataset for the challenge: https://www.kaggle.com/google/google-landmarks-dataset
- Bigger dataset: https://github.com/cvdfoundation/google-landmark
Start experiments with `python main.py {ID} {FLAGS}`.
Checkpoints, evaluation artifacts, and logs are stored in subdirectories named after the experiment and the passed flags.
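
The exact naming scheme for those subdirectories is not shown in this README; a minimal sketch of one way to derive such a directory name from the experiment name and its flags (the function and its layout are assumptions, not the repository's code):

```python
# Hypothetical sketch: build an artifact directory name from an experiment
# name and its flags, so different flag combinations do not overwrite each other.
import os

def artifact_dir(root: str, exp_name: str, flags: dict) -> str:
    # e.g. artifact_dir("evaluation", "exp_01_baseline", {"lr": 0.001})
    #   -> "evaluation/exp_01_baseline_lr=0.001"
    suffix = "_".join(f"{k}={v}" for k, v in sorted(flags.items()))
    return os.path.join(root, f"{exp_name}_{suffix}" if suffix else exp_name)
```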
- Only for testing purposes.
- Crop the input image into 5 sub-images and extract features from each one. Uses triplet loss. Does not work yet! (See the sketch below.)
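
The five-crop experiment's code is not shown here; the following is a minimal sketch of the general technique it names, using torchvision's `FiveCrop` transform and PyTorch's `TripletMarginLoss` (the library choices and the ResNet-18 backbone are assumptions, not the repository's implementation):

```python
# Hypothetical sketch of five-crop feature extraction with a triplet loss.
# Backbone and library choices are assumptions, not the repo's actual code.
import torch
import torch.nn as nn
from torchvision import models, transforms

five_crop = transforms.Compose([
    transforms.Resize(256),
    transforms.FiveCrop(224),  # 4 corner crops + 1 center crop
    transforms.Lambda(lambda crops: torch.stack(
        [transforms.ToTensor()(c) for c in crops])),
])

backbone = models.resnet18(weights=None)  # requires torchvision >= 0.13
backbone.fc = nn.Identity()  # use the pooled features as the embedding
backbone.eval()

def embed(img):
    """Embed a PIL image: (5, 3, 224, 224) crops -> mean-pooled feature vector."""
    crops = five_crop(img)
    with torch.no_grad():
        feats = backbone(crops)  # (5, 512)
    return feats.mean(dim=0)     # (512,)

triplet_loss = nn.TripletMarginLoss(margin=1.0)
# loss = triplet_loss(anchor_emb, positive_emb, negative_emb)
```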