
Question about different results when trying to reproduce #1

Closed
FabianGebben opened this issue Mar 18, 2024 · 2 comments

@FabianGebben

Thank you for sharing your work! Your paper was very interesting, and the results are very impressive.

I have a question regarding the evaluation on MSLS-val. I attempted to reproduce your results by following the repository: downloading the data and training the model as described in the README. Initially, I trained the model solely on the MSLS dataset, then evaluated on MSLS by running the following command for both my own trained model and the provided model:

```
python3 eval.py --datasets_folder=/path/to/your/datasets_vg/datasets --dataset_name=msls --resume=/path/to/finetuned/msls/model/SelaVPR_msls.pth --rerank_num=100
```

However, these were the results that I obtained:

| Model | R@1 | R@5 | R@10 |
| --- | --- | --- | --- |
| Claimed performance in README | 90.8 | 96.4 | 97.2 |
| Self-trained model | 87.0 | 94.0 | 95.6 |
| Downloaded model | 86.6 | 93.8 | 95.6 |

Further fine-tuning the model on Pitts30k and evaluating it reproduced the Pitts30k results from your README exactly. Therefore, I'm wondering whether you could help me understand the discrepancy on MSLS-val. Am I evaluating with the wrong data, or is there something else I might be missing?

@Lu-Feng (Owner) commented Mar 18, 2024

Hello, thanks for your interest in our work. I guess you used this repository (https://github.com/gmberton/VPR-datasets-downloader) to format the MSLS dataset and used all query images (about 11k) in MSLS-val for testing. However, the official version of MSLS-val (https://github.com/mapillary/mapillary_sls) contains only 740 query images (i.e., a subset). The vast majority of VPR works use this official version of MSLS-val for testing. You can get these 740 query images through the official repository, or get the keys (names) of these images here.
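For reference, a minimal sketch of this filtering step might look like the following. It assumes the 740 official keys are saved as a plain-text file with one key per line; the file name `msls_val_qkeys.txt`, the folder layout, and the `@`-separated file-name convention are assumptions based on the VPR-datasets-downloader output format, not something either repository guarantees:

```python
# Minimal sketch (under assumed paths/names): restrict a formatted MSLS-val
# queries folder to the official 740-query subset.
import shutil
from pathlib import Path

keys_file = Path("msls_val_qkeys.txt")                   # hypothetical: one key per line
queries_dir = Path("datasets/msls/images/val/queries")   # hypothetical layout
subset_dir = Path("datasets/msls/images/val/queries_740")

# Load the official query keys into a set for fast membership tests.
official_keys = {line.strip() for line in keys_file.read_text().splitlines() if line.strip()}

subset_dir.mkdir(parents=True, exist_ok=True)
kept = 0
for img in sorted(queries_dir.glob("*.jpg")):
    # VPR-datasets-downloader encodes metadata in '@'-separated fields of the
    # file name; keep the image if any field matches an official key.
    if any(field in official_keys for field in img.stem.split("@")):
        shutil.copy(img, subset_dir / img.name)
        kept += 1
print(f"Kept {kept} of the expected 740 query images")
```

After copying, pointing `--datasets_folder` at the subset (or replacing the queries folder with it) should make the evaluation match the standard protocol.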

@FabianGebben (Author)

Yes, I indeed used that repository to format the dataset, and I used all the validation query images for testing. I was not aware that a subset of MSLS is typically used for MSLS-val evaluation; this most likely explains the differences I encountered. I'll use the official MSLS-val subset for testing as advised. Thank you so much for the help!
