Regression Loss #210
Comments
I have the same question and am not sure if I am using the HDF5DataLayer in the right way. I hope someone here can give me some guidance. In the file generate_sample_data.py: data = np.arange(total_size). So here we have batch_size = 10, an input channel size of 8, and an image size of 5 x 5. Am I right? Then using the EuclideanLossLayer would finally suit stibor's need. But I am not sure whether the EuclideanLossLayer already supports such a 2-D regression task.
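For reference, a minimal sketch (my addition, not from the thread) of how such an HDF5 file could be written with numpy and h5py, assuming the shapes discussed above (10 samples, 8 channels, 5 x 5 images) and two regression targets per sample; the file and dataset names here are placeholders:

```python
import numpy as np
import h5py

# Sketch only: shapes follow the discussion above
# (10 samples, 8 channels, 5 x 5 images, two regression targets each).
num, channels, height, width = 10, 8, 5, 5
total_size = num * channels * height * width

data = np.arange(total_size, dtype=np.float32)
data = data.reshape(num, channels, height, width)   # N x C x H x W, the layout Caffe expects
label = np.random.uniform(-1, 1, size=(num, 2)).astype(np.float32)  # (x_1, x_2) targets

with h5py.File('train.h5', 'w') as f:
    # Dataset names must match the top blob names of the HDF5Data layer in the prototxt.
    f.create_dataset('data', data=data)
    f.create_dataset('label', data=label)

# The HDF5Data layer's "source" parameter points to a text file listing HDF5 files:
with open('train_h5_list.txt', 'w') as f:
    f.write('train.h5\n')
```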
The second reshape you give looks correct. I'm not sure about the Euclidean loss layer.
I have tried several models with different parameters for regression with a Euclidean loss layer. The output of the final fc layer is no better than the mean of the input points.
@yzheng624 I have the same problem: at each iteration the output of the final fc layer is almost the same no matter what the input is. I don't know whether you have solved your problem.
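As a quick sanity check (my sketch, not from the thread), one can compute the Euclidean loss that a constant mean prediction would achieve and compare it with the trained net's loss; the label array here is hypothetical:

```python
import numpy as np

# Hypothetical training targets, shape (num_samples, 2); substitute your own labels.
labels = np.random.uniform(-1, 1, size=(1000, 2)).astype(np.float32)

# Constant prediction: the per-component mean of the labels.
mean_pred = labels.mean(axis=0)

# Euclidean loss of that constant prediction, using Caffe's 1/(2N) scaling.
baseline_loss = np.sum((labels - mean_pred) ** 2) / (2.0 * labels.shape[0])
print('baseline loss of predicting the mean:', baseline_loss)
```

If the trained model's Euclidean loss does not drop below this baseline, the regressor has effectively collapsed to predicting the mean.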
I had a similar problem initially when I replaced the logistic regression from the imagenet model with a regression loss. The problem was, however, solved when I copied layers 1 and 2 from the reference model instead of training them. Also, when the outputs were initially almost the same, they became equal after layer 3 (not entirely sure whether layer 3 or layer 4). Note that I was using a leveldb for input and was regressing 25 outputs. Did you try a shallow model without a normalization layer for your data?
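A minimal sketch of that weight-copying step via the pycaffe interface (my addition, not from the thread); the prototxt and caffemodel file names are placeholders:

```python
import caffe

# Placeholder file names; substitute your own network definition and reference model.
net = caffe.Net('regression_train.prototxt', caffe.TRAIN)

# Parameters are copied by layer name: layers that share names with the reference
# model (e.g. conv1, conv2) receive its trained weights, while layers with new
# names (such as the regression fc layer) keep their random initialization.
net.copy_from('reference.caffemodel')
```

To keep the copied layers fixed during training, their learning-rate multipliers (blobs_lr in older Caffe versions, lr_mult in newer ones) can be set to zero in the prototxt.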
See #881 for the latest suggestions. |
Hi there,
I have a set of images of galaxies, and the labels are (x_1, x_2) \in [-1,1], which are related
to the principal axes of the galaxies, so the learning problem is a 2-D regression. Can somebody give me a hint how to implement that in Caffe?
Would it be the right way to add an additional field in caffe.proto (message Datum {...}), e.g.
optional float label_float = 7;
and use
EuclideanLossLayer<Dtype>
?
Thanks,
Thomas
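On the EuclideanLossLayer question: a small numpy sketch (my addition) of what that layer computes when both the prediction blob and the label blob have shape batch_size x 2, using Caffe's 1/(2N) scaling; the values are made up:

```python
import numpy as np

# Made-up predictions and (x_1, x_2) targets, both of shape (batch_size, 2).
pred  = np.array([[0.1, -0.3], [0.5, 0.2]], dtype=np.float32)
label = np.array([[0.0, -0.5], [0.4, 0.1]], dtype=np.float32)

n = pred.shape[0]
# EuclideanLoss: 1/(2N) * sum of squared element-wise differences.
loss = np.sum((pred - label) ** 2) / (2.0 * n)
print(loss)
```

So a two-component label blob works directly with the Euclidean loss; as suggested earlier in the thread, the usual way to feed such labels is the HDF5DataLayer rather than adding a new field to Datum.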