one shot CNN for both detection and feature extraction #7

Closed
Grabber opened this issue Apr 18, 2017 · 7 comments

Grabber commented Apr 18, 2017

Is there any code showing the loss function used to train the current residual network?
What do the authors think about having a single network do both detection and feature extraction? Is it possible?

nwojke (Owner) commented May 1, 2017

The training code is not yet published, but that should be a follow-up at some point.

About detection and feature extraction in a single network: We did some initial experiments where we fine-tuned the final layer of a pre-trained py-faster-rcnn on VOC, but it did not work that well for re-identification / tracking. It would be interesting to hear if you get reasonable results, though.

Grabber (Author) commented May 1, 2017

@nwojke

Are you using a cosine loss function on the network that is extracting features for re-identification? Could you share it, please?

What I'm thinking about is using the object detection patch features as input to the re-identification network, in a single shot. The object detection features already know how to describe pedestrians, for example. I think this is a better idea than just feeding pedestrian image patches to a new network.

What do you think?

nwojke (Owner) commented Jun 14, 2017

Sorry for the late reply. We do use a cosine loss function, but haven't found the time to release the training code yet. I will let you know when we get there. Until then, you could experiment with some other well-established loss formulations (e.g., [1, 2]) and check how well they perform when the faster-rcnn features are provided as input. The tracker supports changing from the cosine to the Euclidean metric; just change "cosine" to "euclidean" in deep_sort_app.py:162.

[1] https://arxiv.org/abs/1703.07737
[2] https://arxiv.org/abs/1511.05939
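
For reference, the metric is constructed in a single place in deep_sort_app.py. A rough sketch of that call site (the threshold and budget values here are illustrative, not necessarily the defaults used in the repository):

```python
from deep_sort import nn_matching
from deep_sort.tracker import Tracker

# Appearance metric used to associate detections with existing tracks.
# Switching from cosine to Euclidean distance only requires changing the
# metric name passed to this constructor (deep_sort_app.py:162).
metric = nn_matching.NearestNeighborDistanceMetric(
    "euclidean",  # was "cosine"
    0.2,          # matching threshold on the appearance distance
    100)          # budget: max number of stored appearance samples per track
tracker = Tracker(metric)
```

Note that a threshold tuned for the cosine distance will generally not carry over to the Euclidean metric, so the matching threshold needs to be re-tuned after the switch.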

nwojke (Owner) commented Jul 24, 2017

Closing this one down due to inactivity.

tonmoyborah commented

@nwojke Is the code for training on objects other than pedestrians out by now?

ovgeorge commented Jul 8, 2018

Is the code for training on objects other than pedestrians out by now?

abewley (Collaborator) commented Jul 8, 2018

Please see @nwojke's other repo, cosine_metric_learning, for the training code and the associated reference.
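
For anyone looking for the gist before digging into that repo: the training objective there is a cosine-softmax classifier. A minimal sketch of that general idea, written here in PyTorch with illustrative names (the repository itself is implemented in TensorFlow, so this is not its actual code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineSoftmaxLoss(nn.Module):
    """Sketch of a cosine-softmax classification loss.

    Embeddings and class weight vectors are L2-normalized, so the logits are
    scaled cosine similarities; training then uses ordinary cross-entropy.
    """

    def __init__(self, feature_dim, num_classes):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(num_classes, feature_dim) * 0.01)
        # Learnable logit scale (often called kappa); initialization is arbitrary here.
        self.log_scale = nn.Parameter(torch.zeros(1))

    def forward(self, features, labels):
        features = F.normalize(features, dim=1)     # unit-length embeddings
        weights = F.normalize(self.weights, dim=1)  # unit-length class templates
        logits = torch.exp(self.log_scale) * features @ weights.t()
        return F.cross_entropy(logits, labels)
```

At test time the classifier head is dropped and the L2-normalized embeddings are compared directly with the cosine distance, which is what the tracker's "cosine" nearest-neighbor metric expects.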
