/multi_stage_meanfield.cpp #19
@srika91 Hi, yep. 1) You need the spatial.par and bilateral.par files; they let you set hyper-weights per class. 2) You should be able to get things running by adding crop layers to torch-caffe-binding, since no crop layer is implemented in upstream Caffe, while you can find one in Jon's future-version Caffe or here.
Thanks for your reply. I cannot figure out how to do it. Should the crop layers be added in caffe.cpp of torch-caffe-binding?
Not yet, as far as I know. Meanwhile, you can take out the crop layers and the multi_stage_meanfield layer from the prototxt. Then you should be able to use torch-caffe-binding.
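For reference, "taking out" those layers means deleting blocks of roughly this shape from the model prototxt (the layer names and bottoms/tops below are illustrative, not the exact ones in the CRF-as-RNN model files, which may also use the older V1 layer syntax):

```prototxt
# Remove (or comment out) any Crop layers and the MultiStageMeanfield
# layer, i.e. blocks shaped like these hypothetical examples:
layer {
  name: "crop"
  type: "Crop"
  bottom: "score"
  bottom: "data"
  top: "score-crop"
}
layer {
  name: "inference1"
  type: "MultiStageMeanfield"
  bottom: "unary"
  bottom: "data"
  top: "pred"
}
```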
Did you solve the problem? I got the same warning while compiling.
@JoestarK If you are talking about the warnings at the top, I solved them by commenting out all the Caffe paths in my bash profile, removing crfasrnn completely, and running 'make' again. That solved the problem, but I haven't figured out torch-caffe-binding yet. Also, the crfasrnn I am using is built with Caffe in CPU mode, which works fine for running the demos. Training it myself takes a long time.
@srika91 Thanks for your reply. But I couldn't find any Caffe paths in my bash profile. I removed crfasrnn and ran 'make' again, and it still shows a warning like this: src/caffe/layers/multi_stage_meanfield.cpp: In instantiation of ‘void caffe::MultiStageMeanfieldLayer<Dtype>::LayerSetUp(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]’:
Do you have a PYTHONPATH in your bash profile pointing to Caffe's python folder?
@srika91 There are just two paths in my bash profile: one is CUDA's PATH, the other is JAVA's.
Do you have a working Caffe? If yes, then I have no idea about your problem. In my case, I commented out the previously working Caffe entries in my bash profile, which included a PYTHONPATH pointing to the python folder inside the main Caffe folder. Then I rebuilt Caffe from this repo and it worked fine for me.
@srika91 Thanks anyway
Is there a MultiStageMeanfieldLayerTest class for gradient checking available? The error in the first post seems to suggest that, but I can't find a test for the CRF-as-RNN layer in the Caffe code. @srika91
@lynetcha @srika91 Gradient-check test code is available now: https://github.com/bittnt/caffe
Thanks!
Hi, I think I have the same problem but I don't understand what you did... What is funnier is that the error message just above ("Aborted", etc.) points to Forward_cpu(), but if I comment out all the code there, I pass every one of the tests!
@PeterJackNaylor You need the spatial.par and bilateral.par files; they let you set hyper-weights per class. These files should be present in the path of the script you run.
Hello,
I am trying to build Caffe with GPU support. Initially, while compiling, an error was displayed saying "could not find spatial.par ....", so I copied "spatial.par" to the path where it compiles, and then added "bilateral.par" as well when it threw an error about that file too. I took these from the python-scripts available for Python users.
When I try to compile using "make all", I get the following warnings, but all the other files compile cleanly.
src/caffe/layers/multi_stage_meanfield.cpp: In instantiation of ‘void caffe::MultiStageMeanfieldLayer<Dtype>::LayerSetUp(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]’:
src/caffe/layers/multi_stage_meanfield.cpp:254:1:   required from here
src/caffe/layers/multi_stage_meanfield.cpp:68:83: warning: format ‘%lf’ expects argument of type ‘double*’, but argument 3 has type ‘float*’ [-Wformat=]
fscanf(pFile, "%lf", &this->blobs_[0]->mutable_cpu_data()[i * channels_ + i]);
^
src/caffe/layers/multi_stage_meanfield.cpp:75:83: warning: format ‘%lf’ expects argument of type ‘double*’, but argument 3 has type ‘float*’ [-Wformat=]
fscanf(pFile, "%lf", &this->blobs_[1]->mutable_cpu_data()[i * channels_ + i]);
^
src/caffe/layers/multi_stage_meanfield.cpp: In member function ‘void caffe::MultiStageMeanfieldLayer<Dtype>::LayerSetUp(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]’:
src/caffe/layers/multi_stage_meanfield.cpp:68:7: warning: ignoring return value of ‘int fscanf(FILE*, const char*, ...)’, declared with attribute warn_unused_result [-Wunused-result]
fscanf(pFile, "%lf", &this->blobs_[0]->mutable_cpu_data()[i * channels_ + i]);
^
src/caffe/layers/multi_stage_meanfield.cpp:75:7: warning: ignoring return value of ‘int fscanf(FILE*, const char*, ...)’, declared with attribute warn_unused_result [-Wunused-result]
fscanf(pFile, "%lf", &this->blobs_[1]->mutable_cpu_data()[i * channels_ + i]);
^
src/caffe/layers/multi_stage_meanfield.cpp: In member function ‘void caffe::MultiStageMeanfieldLayer<Dtype>::LayerSetUp(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = double]’:
src/caffe/layers/multi_stage_meanfield.cpp:68:7: warning: ignoring return value of ‘int fscanf(FILE*, const char*, ...)’, declared with attribute warn_unused_result [-Wunused-result]
fscanf(pFile, "%lf", &this->blobs_[0]->mutable_cpu_data()[i * channels_ + i]);
^
src/caffe/layers/multi_stage_meanfield.cpp:75:7: warning: ignoring return value of ‘int fscanf(FILE*, const char*, ...)’, declared with attribute warn_unused_result [-Wunused-result]
fscanf(pFile, "%lf", &this->blobs_[1]->mutable_cpu_data()[i * channels_ + i]);
Also, during "make runtest", it takes a very long time on this test:
1 test from MultiStageMeanfieldLayerTest/2, where TypeParam = caffe::FloatGPU
[ RUN ] MultiStageMeanfieldLayerTest/2.TestGradient
Hence I stopped the test.
Once I build this, I will try to use the model in Torch 7 via torch-caffe-binding.
Sorry for the long post.
Regards
srikanth