
Question on models in lpo #21

Open
mtmarsh2 opened this issue Jul 29, 2015 · 4 comments
@mtmarsh2

Hi! The paper mentions that models are trained for both Pascal VOC and COCO, since objects in COCO tend to occupy a small part of the image and a different model is needed to segment such small regions. In the data folder, there are files that have VOC in the name. Are the COCO models already in this library, or do we need to train on COCO to generate them?

@philkr (Owner)

philkr commented Jul 31, 2015

I didn't upload the COCO model, as the VOC model already performs quite well on COCO too. If you want, you can train them on COCO, but I'd recommend just using the VOC ones.

@mtmarsh2 (Author)

Great! Also, can you explain the difference between the VOC models? In the paper it's mentioned that some are better at certain tasks than others, but I couldn't tell which was which from the names in the repo. Does the "0.5" in lpo_VOC_0.5.dat, for example, correspond to lambda in the paper?

@philkr (Owner)

philkr commented Jul 31, 2015

It's the lambda parameter in the paper. It controls the number of proposals produced.
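Conceptually, lambda acts as a per-proposal penalty in the objective the paper optimizes: a candidate proposal model is only worth keeping if its marginal gain in recall outweighs lambda, so a larger lambda yields fewer proposals. The following toy sketch illustrates that trade-off; it is not the lpo code or API, and the model names and gain values are made up:

```python
# Toy illustration (NOT the lpo API): lambda is a per-proposal cost,
# so larger lambda values keep fewer proposal models.
def select_proposals(candidates, lam):
    """Keep candidates whose marginal recall gain exceeds lambda.

    candidates: list of (name, marginal_recall_gain) pairs,
                sorted by decreasing gain.
    """
    kept = []
    for name, gain in candidates:
        if gain > lam:  # the proposal pays for its per-proposal cost
            kept.append(name)
    return kept

# Hypothetical candidate models with made-up recall gains.
candidates = [("m1", 0.30), ("m2", 0.10), ("m3", 0.02)]
print(select_proposals(candidates, 0.05))  # small lambda -> more proposals
print(select_proposals(candidates, 0.2))   # large lambda -> fewer proposals
```

This matches the naming convention asked about above: a model file tagged 0.5 was trained with a larger lambda (fewer proposals) than one tagged 0.05.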


@kshalini

@philkr, I was also able to download and run the training.

I generated a model file fairly quickly (< 15 min) with the setting f0 0.05, using a VOC 2007 dataset I had readily available.

But before I go on to make the box predictions, I guess 'sf.dat' is used, with the following call:

detector = getDetector('mssf')

Just a few questions:

  • Is this 'sf.dat' a standard file to be used irrespective of the dataset used for training, or is there anything to be done to generate this file for a custom dataset?
  • There are also a few more files under /data (seed_all, seed_medium, etc.). What is their significance, and how/when do I use them?

Please help! Thanks.
