
Implement MultiLabel losses and data input #149

Closed
sergeyk opened this issue Feb 24, 2014 · 10 comments
@sergeyk
Contributor

sergeyk commented Feb 24, 2014

Start with reference [1].

[1] Deep Convolutional Ranking for Multilabel Image Annotation
Yunchao Gong, Yangqing Jia, Sergey Ioffe, Alexander Toshev, Thomas Leung
http://arxiv.org/abs/1312.4894

@shelhamer
Member

See also:

[2] J. Weston, S. Bengio, and N. Usunier. Wsabie: Scaling up to large vocabulary image annotation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, Volume Three, pages 2764–2770. AAAI Press, 2011.
http://research.google.com/pubs/pub37180.html

[3] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, M. A. Ranzato, and T. Mikolov. Devise: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems 26, pages 2121–2129. 2013.
http://research.google.com/pubs/pub41473.html

@sergeyk
Contributor Author

sergeyk commented Feb 25, 2014

Note: the HDF5DataLayer in #147 already reads labels in as matrices.
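For illustration, the matrix label layout such a layer would read can be produced with a simple multi-hot encoding. This is a NumPy sketch only; the function name and shapes are hypothetical, not part of the HDF5DataLayer API:

```python
import numpy as np

def to_multihot(label_lists, num_classes):
    """Encode per-image label lists as an N x C multi-hot matrix.

    Each row is one image; entry [i, c] is 1.0 if class c applies
    to image i, else 0.0."""
    mat = np.zeros((len(label_lists), num_classes), dtype=np.float32)
    for i, labels in enumerate(label_lists):
        mat[i, labels] = 1.0
    return mat

# Three images, five classes: image 0 has classes {0, 3}, image 1 has
# class {2}, image 2 has classes {1, 2, 4}.
labels = to_multihot([[0, 3], [2], [1, 2, 4]], num_classes=5)
```

A matrix like this could then be stored as an HDF5 dataset alongside the image data.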

@kloudkl
Contributor

kloudkl commented Feb 25, 2014

This issue was opened as an umbrella for #144, in which @leezl agreed to contribute back the MultiLabelDataLayer and all the related changes to the proto definitions, I/O, etc.

The "loss function appropriate to multilabel data needs to be implemented" that @leezl mentioned probably means extending the existing loss layers to support multilabel data. The todo list includes, but is not limited to, the SoftmaxWithLossLayer used by reference [1].

Besides that, the WARPLossLayer in #126, which implements the ranking loss function of references [1][2][3], requires multilabel data. In fact, any ranking loss function inherently requires multilabel data. I will keep in sync with progress on the data layer.
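For reference, the WARP ranking loss of [1][2] can be sketched in NumPy as follows. This is an illustrative version written for this thread: the function name, margin value, and sampling loop follow the papers' description (sample negatives until one violates the margin, then weight the hinge by a rank-based coefficient), but it is not any Caffe implementation:

```python
import numpy as np

def warp_loss(scores, positives, rng, margin=1.0):
    """WARP loss sketch for a single example.

    scores: 1-D array of per-class scores.
    positives: set of positive class indices (multilabel).
    rng: np.random.Generator for negative sampling.

    For each positive class, negatives are sampled until one scores
    within `margin` of the positive; the hinge loss is then weighted
    by sum(1/k for k=1..est_rank), where est_rank is estimated from
    the number of sampling trials needed."""
    num_classes = len(scores)
    negatives = [c for c in range(num_classes) if c not in positives]
    total = 0.0
    for p in positives:
        for trials in range(1, len(negatives) + 1):
            n = negatives[rng.integers(len(negatives))]
            if scores[n] + margin > scores[p]:  # violating negative found
                est_rank = len(negatives) // trials
                weight = sum(1.0 / k for k in range(1, est_rank + 1))
                total += weight * (margin - scores[p] + scores[n])
                break
    return total
```

Note how the loss needs the full set of positive classes per example, which is why any ranking loss presupposes multilabel data input.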

@sguada
Contributor

sguada commented May 22, 2014

@sergeyk @shelhamer @jeffdonahue @Yangqing I'm implementing some multi-label losses and an accuracy layer, but I would like to settle the following: with three label values {-1, 0, 1} meaning negative, ignore, and positive, should I use 0 or -1 for negative, and conversely -1 or 0 for ignoring that label?

@sergeyk
Contributor Author

sergeyk commented May 22, 2014

I'd say 0 for ignoring

@rbgirshick
Contributor

I second that (0 for ignoring).


@shelhamer
Member

By multiple label, do you mean ranking loss or one-vs-all style "attribute" classification? For one-vs-all, I sketched up a loss layer that takes a loss type and makes a classifier + loss pair for each class.

If ranking, or another true multi-class loss, that'll be sweet to have!

Re: how to ignore, I vote for 0 too.


@sguada
Contributor

sguada commented May 22, 2014

So the people have spoken: 0 for ignoring, -1 for negative, and 1 for positive.

@shelhamer right now I'm changing the window_data_layer to generate multiple labels, modifying cross_entropy for the loss, and creating a new multi-label accuracy layer. So no ranking loss yet.

If you could share your loss layer with me I would appreciate it.
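Under the convention just agreed on, the masked loss and accuracy could look like the following NumPy sketch. The function names and the logit-thresholding rule are illustrative assumptions for this thread, not the eventual Caffe layers:

```python
import numpy as np

def masked_sigmoid_cross_entropy(logits, labels):
    """Per-class sigmoid cross-entropy with labels in {-1, 0, 1}.

    -1 = negative, 0 = ignored, 1 = positive (this thread's
    convention). Ignored entries contribute nothing to the loss."""
    logits = np.asarray(logits, dtype=np.float64)
    labels = np.asarray(labels)
    mask = labels != 0
    targets = (labels == 1).astype(np.float64)
    # Numerically stable form: max(x, 0) - x * t + log(1 + exp(-|x|))
    loss = (np.maximum(logits, 0) - logits * targets
            + np.log1p(np.exp(-np.abs(logits))))
    return (loss * mask).sum() / max(mask.sum(), 1)

def multilabel_accuracy(logits, labels):
    """Fraction of non-ignored classes predicted correctly."""
    labels = np.asarray(labels)
    mask = labels != 0
    pred = np.asarray(logits) > 0  # sigmoid(x) > 0.5  <=>  x > 0
    correct = (pred == (labels == 1)) & mask
    return correct.sum() / max(mask.sum(), 1)
```

The key point is that the 0 labels are masked out of both the loss sum and the accuracy denominator, so "ignore" truly means no gradient and no credit.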

@bhack
Contributor

bhack commented Sep 22, 2014

Is this something different from @sguada's latest plans in #523, or can this issue be closed?

@sguada
Contributor

sguada commented Sep 22, 2014

This can be closed; the follow-up to #149 continues in #523.


lukeyeager pushed a commit to lukeyeager/caffe that referenced this issue May 27, 2016
Setting maximum roundable bin size to 4M