How to train with different dataset #11
sshb.py in path
Thank you so much. After the required modifications I could get the JSON files, but now I have another question: do you have any script to train the model on Windows instead of Linux (train.sh)?
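(Not from the thread, but as a general workaround: shell scripts like train.sh typically just select a GPU and call the Python entry point, and the same can be done from a small launcher that runs on Windows. This is only a sketch under that assumption; the entry-point name train.py and the CUDA_VISIBLE_DEVICES variable are guesses and should be checked against the actual contents of train.sh.)

```python
# windows_train.py - hypothetical Windows-side replacement for train.sh.
# Assumes the shell script only sets CUDA_VISIBLE_DEVICES and then runs
# "python train.py"; adjust the command to match the repository.
import os
import subprocess
import sys


def main() -> None:
    env = os.environ.copy()
    # Pick the GPU the way a typical train.sh would (assumption).
    env.setdefault("CUDA_VISIBLE_DEVICES", "0")

    # Launch training with the current interpreter so the same
    # virtual environment is reused on Windows.
    cmd = [sys.executable, "train.py"]
    result = subprocess.run(cmd, env=env)
    sys.exit(result.returncode)


if __name__ == "__main__":
    main()
```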
@taohan10200 How do you generate the size_maps? Edit:
Scale maps are used to generate the box annotations, which we have provided in the processed dataset. You do not need to generate scale maps.
But this thread is about training on a different dataset, so I'm certain I have to generate scale maps if I want to check your method on my data. Therefore I'm asking: is the script mentioned before the correct way to generate scale maps?
About training on a different dataset: if you do not need to evaluate the localization performance, the bounding box annotation is not necessary in the labeled JSON file, so the scale maps can be neglected. In fact, the generation of scale maps is performed by another model, which is not provided in this project.
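(For anyone preparing a custom dataset along these lines: per the answer above, the labeled JSON only needs the head points, with boxes optional if localization is not evaluated. The sketch below illustrates one possible per-image file; the field names "human_num", "points" and "boxes" are assumptions modeled on common crowd-counting annotation layouts, not taken from this repository, so compare them with a JSON file from the provided processed dataset before training.)

```python
# make_labels.py - hypothetical sketch of writing a labeled JSON for one image.
# Field names ("human_num", "points", "boxes") are assumptions; verify them
# against a file from the provided processed dataset.
import json


def save_label(json_path, points, boxes=None):
    """Write one image's annotation.

    points: list of [x, y] head coordinates.
    boxes:  optional list of [x1, y1, x2, y2] head boxes; only needed if
            localization performance is to be evaluated.
    """
    label = {
        "human_num": len(points),
        "points": points,
    }
    if boxes is not None:
        label["boxes"] = boxes
    with open(json_path, "w") as f:
        json.dump(label, f)


# Example: two annotated heads, no boxes (localization not evaluated).
save_label("0001.json", points=[[120.5, 334.0], [640.2, 210.7]])
```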
Hi!
First of all, I want to thank you for sharing your work (paper and code) with the community :)
I would also like to train the model you propose on my own dataset. Could you share the code used to obtain the ground truth (the JSON files for the SSHB images)? Thank you in advance.