Implement MultiLabel losses and data input #149
See also:

[2] J. Weston, S. Bengio, and N. Usunier. WSABIE: Scaling up to large vocabulary image annotation. In IJCAI, 2011.
[3] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, M. A. Ranzato, and T. Mikolov. DeViSE: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems 26, pages 2121–2129, 2013.
Note: HDF5DataLayer in #147 reads in labels as matrices already.
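As a concrete illustration of that matrix-style label input, here is a minimal sketch, assuming Python with h5py and numpy, of writing an HDF5 file whose label dataset is an N x K matrix; the dataset names "data" and "label" and the {+1, -1, 0} encoding are assumptions for illustration, not requirements of HDF5DataLayer.

```python
# Sketch: an HDF5 file whose "label" dataset is an N x K matrix
# (one row per image, one column per class) instead of a single scalar.
# Dataset names, dtypes, and the encoding are illustrative assumptions.
import h5py
import numpy as np

num_images, channels, height, width = 4, 3, 32, 32
num_classes = 10

data = np.random.rand(num_images, channels, height, width).astype(np.float32)

# Multi-hot labels: +1 = positive, -1 = negative, 0 = ignore
# (the convention converged on later in this thread).
labels = np.zeros((num_images, num_classes), dtype=np.float32)
labels[0, [1, 4]] = 1    # image 0 has classes 1 and 4
labels[0, [2, 7]] = -1   # classes 2 and 7 are explicit negatives
                         # remaining zeros are "don't care"

with h5py.File('multilabel_train.h5', 'w') as f:
    f.create_dataset('data', data=data)
    f.create_dataset('label', data=labels)
```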
This issue is opened to be an umbrella of #144, in which @leezl has agreed to contribute back the MultiLabelDataLayer and all the related changes to proto, io, etc. The "loss function appropriate to multilabel data needs to be implemented" that @leezl talked about probably means extending the existing loss layers to support multilabel data. The todo list includes, but is not limited to, the SoftmaxWithLossLayer that is used by reference [1]. Besides, the WARPLossLayer in #126, which implements the ranking loss function of references [1][2][3], demands multilabel data; in fact, any ranking loss function inherently requires multilabel data. I will keep in sync with progress on the data layer.
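For orientation on the ranking side, below is a rough numpy sketch of the sampled WARP-style loss described in [1][2]. It is not the WARPLossLayer from #126; the function name, the rank estimate, and the sampling scheme are illustrative assumptions based on the papers.

```python
# Sketch of a sampled WARP-style ranking loss (Weston et al. [2], used for
# multilabel annotation in [1]). Illustrative numpy only; not the layer in #126.
import numpy as np

def warp_loss_single(scores, positives, num_classes, rng):
    """scores: (num_classes,) predicted scores for one image.
    positives: indices of the labels marked positive for that image."""
    # Rank weights L(k) = 1 + 1/2 + ... + 1/k favor precision at the top.
    L = np.cumsum(1.0 / np.arange(1, num_classes + 1))
    negatives = np.setdiff1d(np.arange(num_classes), positives)
    loss = 0.0
    for p in positives:
        # Sample negatives until one violates the margin (or give up).
        for trials in range(1, len(negatives) + 1):
            n = rng.choice(negatives)
            margin = 1.0 - scores[p] + scores[n]
            if margin > 0:
                # Estimate the rank of the positive from the number of trials.
                est_rank = (num_classes - 1) // trials
                loss += L[max(est_rank - 1, 0)] * margin
                break
    return loss

rng = np.random.default_rng(0)
scores = rng.normal(size=10)
print(warp_loss_single(scores, positives=[1, 4], num_classes=10, rng=rng))
```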
@sergeyk, @shelhamer, @jeffdonahue, @Yangqing I'm implementing some multi-label losses and accuracy. But I would like to have the following:
I'd say 0 for ignoring |
I second that (0 for ignoring).
By multiple label, do you mean ranking loss or one-vs-all style "attribute" losses? If ranking, or other true multi-class loss, that'll be sweet to have! Re: how to ignore, I vote for 0 too.
So the people have spoken: 0 would be for ignoring, -1 for negative, and 1 for positive. @shelhamer right now I'm changing the window_data_layer to generate multiple labels, modifying the cross_entropy for the loss, and creating a new multi-label_accuracy. So not a ranking loss yet. If you could share your loss layer with me, I would appreciate it.
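To make the {-1, 0, +1} convention concrete, here is a minimal numpy sketch of a sigmoid cross-entropy loss and an accuracy measure that mask out the 0 ("ignore") entries; the function names are hypothetical and this is not the actual layer code being written here.

```python
# Sketch: sigmoid cross-entropy loss and accuracy over a {-1, 0, +1} label
# matrix, where 0 means "ignore". Illustrative numpy, not Caffe layer code.
import numpy as np

def multilabel_sigmoid_cross_entropy(logits, labels):
    """logits, labels: (N, K) arrays; labels take values in {-1, 0, +1}."""
    mask = labels != 0                         # 0 entries contribute nothing
    targets = (labels > 0).astype(np.float64)  # +1 -> 1, -1 -> 0
    x = logits
    # Numerically stable form: max(x, 0) - x*t + log(1 + exp(-|x|))
    loss = np.maximum(x, 0) - x * targets + np.log1p(np.exp(-np.abs(x)))
    return (loss * mask).sum() / max(mask.sum(), 1)

def multilabel_accuracy(logits, labels):
    """Fraction of non-ignored entries whose predicted sign matches the label."""
    mask = labels != 0
    pred = np.where(logits > 0, 1, -1)
    return (pred[mask] == labels[mask]).mean()

logits = np.array([[2.0, -1.0, 0.5], [-0.2, 3.0, -2.0]])
labels = np.array([[1, -1, 0], [-1, 1, -1]])
print(multilabel_sigmoid_cross_entropy(logits, labels))
print(multilabel_accuracy(logits, labels))
```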
This can be closed; #149 is followed up in #523.

Sergio
Start with reference [1].

[1] Y. Gong, Y. Jia, S. Ioffe, A. Toshev, and T. Leung. Deep Convolutional Ranking for Multilabel Image Annotation. http://arxiv.org/abs/1312.4894