
make runtest error #1

Closed
priyapaul opened this issue Jan 30, 2017 · 7 comments

Comments

@priyapaul

I tried to build the code. It gives me the following error:

$ make runtest
.build_release/tools/caffe
caffe: command line brew
usage: caffe

commands:
train train or finetune a model
test score a model
device_query show GPU diagnostic information
time benchmark model execution time

Flags from tools/caffe.cpp:
-gpu (Run in GPU mode on given device ID.) type: int32 default: -1
-iterations (The number of iterations to run.) type: int32 default: 50
-model (The model definition protocol buffer text file..) type: string
default: ""
-snapshot (Optional; the snapshot solver state to resume training.)
type: string default: ""
-solver (The solver definition protocol buffer text file.) type: string
default: ""
-weights (Optional; the pretrained weights to initialize finetuning. Cannot
be set simultaneously with snapshot.) type: string default: ""
.build_release/test/test_all.testbin 0 --gtest_shuffle
Cuda number of devices: 8
Setting to use device 0
Current device id: 0
Note: Randomizing tests' orders with a seed of 10499 .
[==========] Running 1092 tests from 198 test cases.
[----------] Global test environment set-up.
[----------] 6 tests from NesterovSolverTest/3, where TypeParam = caffe::DoubleGPU
[ RUN ] NesterovSolverTest/3.TestNesterovLeastSquaresUpdate
F0130 14:06:28.077040 39193 solver.cpp:60] No grl and fc interval parameters!
*** Check failure stack trace: ***
@ 0x2b06c60c4daa (unknown)
@ 0x2b06c60c4ce4 (unknown)
@ 0x2b06c60c46e6 (unknown)
@ 0x2b06c60c7687 (unknown)
@ 0x2b06c7a006eb caffe::Solver<>::Init()
@ 0x2b06c7a00856 caffe::Solver<>::Solver()
@ 0x4f9302 caffe::NesterovSolverTest<>::InitSolver()
@ 0x4f9bcb caffe::GradientBasedSolverTest<>::InitSolverFromProtoString()
@ 0x4ede9a caffe::GradientBasedSolverTest<>::RunLeastSquaresSolver()
@ 0x4f1613 caffe::NesterovSolverTest_TestNesterovLeastSquaresUpdate_Test<>::TestBody()
@ 0x703dc3 testing::internal::HandleExceptionsInMethodIfSupported<>()
@ 0x6faa07 testing::Test::Run()
@ 0x6faaae testing::TestInfo::Run()
@ 0x6fabb5 testing::TestCase::Run()
@ 0x6fdef8 testing::internal::UnitTestImpl::RunAllTests()
@ 0x6fe187 testing::UnitTest::Run()
@ 0x442eda main
@ 0x2b06c86ccf45 (unknown)
@ 0x447f69 (unknown)
@ (nil) (unknown)
make: *** [runtest] Aborted (core dumped)
$

What could be causing this error? Please help.

@chen1474147
Owner

Hi, I am not sure about this problem. I think it may be the GPU version. Note that I use an old Caffe version; it works fine with CUDA 7.0. You could try compiling a CPU-only version to see whether there are any problems, and then try CUDA 7.0.

@priyapaul
Author

Tried compiling the CPU version; this gives the errors shown below:

ppl4hi@HI-Z0FH9:~/CODES/Deep3DPose-master/5-caffe$ make all
PROTOC src/caffe/proto/caffe.proto
CXX .build_release/src/caffe/proto/caffe.pb.cc
CXX src/caffe/util/benchmark.cpp
CXX src/caffe/util/io.cpp
CXX src/caffe/util/insert_splits.cpp
CXX src/caffe/util/im2col.cpp
CXX src/caffe/util/math_functions.cpp
CXX src/caffe/util/db.cpp
CXX src/caffe/util/cudnn.cpp
CXX src/caffe/util/upgrade_proto.cpp
CXX src/caffe/common.cpp
CXX src/caffe/net.cpp
CXX src/caffe/solver.cpp
CXX src/caffe/internal_thread.cpp
CXX src/caffe/syncedmem.cpp
CXX src/caffe/blob.cpp
CXX src/caffe/data_transformer.cpp
CXX src/caffe/layer_factory.cpp
CXX src/caffe/layers/slice_layer.cpp
CXX src/caffe/layers/memory_data_layer.cpp
CXX src/caffe/layers/power_layer.cpp
CXX src/caffe/layers/split_layer.cpp
CXX src/caffe/layers/silence_layer.cpp
CXX src/caffe/layers/image_data_layer.cpp
CXX src/caffe/layers/tanh_layer.cpp
CXX src/caffe/layers/hdf5_data_layer.cpp
CXX src/caffe/layers/cudnn_softmax_layer.cpp
CXX src/caffe/layers/neuron_layer.cpp
CXX src/caffe/layers/loss_layer.cpp
CXX src/caffe/layers/deconv_layer.cpp
CXX src/caffe/layers/threshold_layer.cpp
CXX src/caffe/layers/cudnn_relu_layer.cpp
CXX src/caffe/layers/contrastive_loss_layer.cpp
CXX src/caffe/layers/mvn_layer.cpp
CXX src/caffe/layers/cudnn_conv_layer.cpp
CXX src/caffe/layers/softmax_layer.cpp
CXX src/caffe/layers/sigmoid_layer.cpp
CXX src/caffe/layers/argmax_layer.cpp
CXX src/caffe/layers/infogain_loss_layer.cpp
CXX src/caffe/layers/base_conv_layer.cpp
CXX src/caffe/layers/sigmoid_cross_entropy_loss_layer.cpp
CXX src/caffe/layers/im2col_layer.cpp
CXX src/caffe/layers/dummy_data_layer.cpp
CXX src/caffe/layers/cudnn_tanh_layer.cpp
CXX src/caffe/layers/unpooling.cpp
In file included from ./include/caffe/common.hpp:19:0,
from src/caffe/layers/unpooling.cpp:5:
./include/caffe/util/device_alternate.hpp:14:15: error: expected initializer before ‘<’ token
void classname<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,
^
src/caffe/layers/unpooling.cpp:223:1: note: in expansion of macro ‘STUB_GPU’
STUB_GPU(UnpoolingLayer);
^
./include/caffe/util/device_alternate.hpp:17:15: error: expected initializer before ‘<’ token
void classname<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,
^
src/caffe/layers/unpooling.cpp:223:1: note: in expansion of macro ‘STUB_GPU’
STUB_GPU(UnpoolingLayer);
^
make: *** [.build_release/src/caffe/layers/unpooling.o] Error 1

@chen1474147
Owner

chen1474147 commented Jan 31, 2017

Sorry, I re-compiled on my computer, but it succeeded...

Please do not use cuDNN. Without it, both the CUDA and CPU versions work fine.

From the error, I think something is wrong in my own unpooling layer. If you are familiar with Caffe, I would recommend removing this layer from Caffe and then trying to compile again.

But it does compile successfully on my computer...

@priyapaul
Author

priyapaul commented Jan 31, 2017

I am not using cuDNN. This is my Makefile.config. Is there anything to change?
Note: the original Caffe version compiles without problems on my system.

## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
# USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr
# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
		-gencode arch=compute_20,code=sm_21 \
		-gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_50,code=compute_50

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
		# $(ANACONDA_HOME)/include/python2.7 \
		# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Uncomment to support layers written in Python (will link against Python libs)
# WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @

@priyapaul
Author

I couldn't fix it. I stopped at make test and did not do make runtest, but there are still other errors!

@chen1474147
Owner

You mean make all succeeds, but make test fails? make test succeeds for me...
When you run make runtest, it should show:

AdaGradSolverTest/0.TestAdaGradLeastSquaresUpdateWithWeightDecay
F0201 23:46:06.387573 16190 solver.cpp:60] No grl and fc interval parameters!
*** Check failure stack trace: ***
@ 0x2b8fb09c6daa (unknown)
@ 0x2b8fb09c6ce4 (unknown)
@ 0x2b8fb09c66e6 (unknown)
@ 0x2b8fb09c9687 (unknown)
@ 0x2b8fb1b54ceb caffe::Solver<>::Init()
@ 0x2b8fb1b54e56 caffe::Solver<>::Solver()
@ 0x4cc806 caffe::AdaGradSolverTest<>::InitSolver()
@ 0x4ccd7b caffe::GradientBasedSolverTest<>::InitSolverFromProtoString()
@ 0x4c0889 caffe::GradientBasedSolverTest<>::RunLeastSquaresSolver()
@ 0x4caa86 caffe::AdaGradSolverTest_TestAdaGradLeastSquaresUpdateWithWeightDecay_Test<>::TestBody()
@ 0x705043 testing::internal::HandleExceptionsInMethodIfSupported<>()
@ 0x6fbc87 testing::Test::Run()
@ 0x6fbd2e testing::TestInfo::Run()
@ 0x6fbe35 testing::TestCase::Run()
@ 0x6ff178 testing::internal::UnitTestImpl::RunAllTests()
@ 0x6ff407 testing::UnitTest::Run()
@ 0x442f2a main

This is because I added an assertion requiring you to set the two parameters.
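For anyone hitting the same assertion: the message from solver.cpp:60 suggests this fork's SolverParameter expects two extra fields in solver.prototxt before make runtest (or training) will pass. The fragment below is only a guess at the shape; the field names are inferred from the error text, so confirm the real names and types in this repository's src/caffe/proto/caffe.proto (message SolverParameter) before use.

```
# Hypothetical solver.prototxt fragment. "grl_interval" and "fc_interval"
# are ASSUMED field names inferred from the error message
# "No grl and fc interval parameters!" -- check this fork's caffe.proto
# (message SolverParameter) for the actual names.
net: "train_val.prototxt"
base_lr: 0.01
grl_interval: 100   # assumed: gradient reversal layer interval
fc_interval: 100    # assumed: fc layer interval
```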

@priyapaul
Author

priyapaul commented Feb 1, 2017

Thank you for your reply. Yes, I can do:
make all - SUCCESS
make test - SUCCESS
make runtest - FAILED, with the same error you mentioned.
So I suppose it's okay not to run make runtest initially, and only to run it after the parameters are set.
Thank you.
