
PoolingLayerTest/0.TestForwardMaxTopMask fails when I run test_caffe_main in unit_tests #35

jeremycheong opened this issue Mar 22, 2018 · 2 comments

@jeremycheong

I am working on a Firefly-RK3399 board. I followed the User Quick Guide and everything went fine until the last step, running the unit tests: one test fails. I checked the source code and tried to fix it, but nothing I changed helped.

[==========] Running 29 tests from 6 test cases.
[----------] Global test environment set-up.
setting up testbed resource
[----------] 8 tests from PoolingLayerTest/0, where TypeParam = caffe::CPUDevice<float>
[ RUN      ] PoolingLayerTest/0.TestSetup
[       OK ] PoolingLayerTest/0.TestSetup (6 ms)
[ RUN      ] PoolingLayerTest/0.TestSetupPadded
[       OK ] PoolingLayerTest/0.TestSetupPadded (0 ms)
[ RUN      ] PoolingLayerTest/0.TestSetupGlobalPooling
[       OK ] PoolingLayerTest/0.TestSetupGlobalPooling (0 ms)
[ RUN      ] PoolingLayerTest/0.TestForwardMax
[       OK ] PoolingLayerTest/0.TestForwardMax (11 ms)
[ RUN      ] PoolingLayerTest/0.TestForwardMaxTopMask
test_pooling_layer.cpp:124: Failure
Expected equality of these values:
  blob_top_mask_->cpu_data()[i + 0]
    Which is: 0
  5
test_pooling_layer.cpp:125: Failure
Expected equality of these values:
  blob_top_mask_->cpu_data()[i + 1]
    Which is: 0
  2
test_pooling_layer.cpp:126: Failure
Expected equality of these values:
  blob_top_mask_->cpu_data()[i + 2]
    Which is: 0
  2
test_pooling_layer.cpp:127: Failure
Expected equality of these values:
  blob_top_mask_->cpu_data()[i + 3]
    Which is: 0
  9
test_pooling_layer.cpp:128: Failure
Expected equality of these values:
  blob_top_mask_->cpu_data()[i + 4]
    Which is: 0
  5
test_pooling_layer.cpp:129: Failure
Expected equality of these values:
  blob_top_mask_->cpu_data()[i + 5]
    Which is: 0
  12
test_pooling_layer.cpp:130: Failure
Expected equality of these values:
  blob_top_mask_->cpu_data()[i + 6]
    Which is: 0
  12
test_pooling_layer.cpp:131: Failure
Expected equality of these values:
  blob_top_mask_->cpu_data()[i + 7]
    Which is: 0
  9
(... the same eight failures at test_pooling_layer.cpp:124-131 repeat three more times, once per remaining image/channel plane ...)
[  FAILED  ] PoolingLayerTest/0.TestForwardMaxTopMask, where TypeParam = caffe::CPUDevice<float> (10 ms)
[ RUN      ] PoolingLayerTest/0.TestForwardMaxPadded
[       OK ] PoolingLayerTest/0.TestForwardMaxPadded (0 ms)
[ RUN      ] PoolingLayerTest/0.TestMax
[       OK ] PoolingLayerTest/0.TestMax (1095 ms)
[ RUN      ] PoolingLayerTest/0.TestForwardAve
[       OK ] PoolingLayerTest/0.TestForwardAve (1 ms)
[----------] 8 tests from PoolingLayerTest/0 (1123 ms total)

[----------] 1 test from SoftmaxLayerTest/0, where TypeParam = caffe::CPUDevice<float>
[ RUN      ] SoftmaxLayerTest/0.TestForward
[       OK ] SoftmaxLayerTest/0.TestForward (0 ms)
[----------] 1 test from SoftmaxLayerTest/0 (0 ms total)

[----------] 6 tests from InnerProductLayerTest/0, where TypeParam = caffe::CPUDevice<float>
[ RUN      ] InnerProductLayerTest/0.TestSetUp
[       OK ] InnerProductLayerTest/0.TestSetUp (0 ms)
[ RUN      ] InnerProductLayerTest/0.TestSetUpTransposeFalse
[       OK ] InnerProductLayerTest/0.TestSetUpTransposeFalse (1 ms)
[ RUN      ] InnerProductLayerTest/0.TestSetUpTransposeTrue
[       OK ] InnerProductLayerTest/0.TestSetUpTransposeTrue (0 ms)
[ RUN      ] InnerProductLayerTest/0.TestForward
[       OK ] InnerProductLayerTest/0.TestForward (0 ms)
[ RUN      ] InnerProductLayerTest/0.TestForwardTranspose
[       OK ] InnerProductLayerTest/0.TestForwardTranspose (1 ms)
[ RUN      ] InnerProductLayerTest/0.TestForwardNoBatch
[       OK ] InnerProductLayerTest/0.TestForwardNoBatch (1 ms)
[----------] 6 tests from InnerProductLayerTest/0 (3 ms total)

[----------] 6 tests from NeuronLayerTest/0, where TypeParam = caffe::CPUDevice<float>
[ RUN      ] NeuronLayerTest/0.TestAbsVal
[       OK ] NeuronLayerTest/0.TestAbsVal (0 ms)
[ RUN      ] NeuronLayerTest/0.TestReLU
[       OK ] NeuronLayerTest/0.TestReLU (0 ms)
[ RUN      ] NeuronLayerTest/0.TestReLUWithNegativeSlope
[       OK ] NeuronLayerTest/0.TestReLUWithNegativeSlope (15 ms)
[ RUN      ] NeuronLayerTest/0.TestSigmoid
[       OK ] NeuronLayerTest/0.TestSigmoid (0 ms)
[ RUN      ] NeuronLayerTest/0.TestTanH
[       OK ] NeuronLayerTest/0.TestTanH (1 ms)
[ RUN      ] NeuronLayerTest/0.TestBNLL
[       OK ] NeuronLayerTest/0.TestBNLL (0 ms)
[----------] 6 tests from NeuronLayerTest/0 (16 ms total)

[----------] 5 tests from LRNLayerTest/0, where TypeParam = caffe::CPUDevice<float>
[ RUN      ] LRNLayerTest/0.TestSetupAcrossChannels
[       OK ] LRNLayerTest/0.TestSetupAcrossChannels (0 ms)
[ RUN      ] LRNLayerTest/0.TestForwardAcrossChannels
[       OK ] LRNLayerTest/0.TestForwardAcrossChannels (2 ms)
[ RUN      ] LRNLayerTest/0.TestForwardAcrossChannelsLargeRegion
[       OK ] LRNLayerTest/0.TestForwardAcrossChannelsLargeRegion (5 ms)
[ RUN      ] LRNLayerTest/0.TestSetupWithinChannel
[       OK ] LRNLayerTest/0.TestSetupWithinChannel (0 ms)
[ RUN      ] LRNLayerTest/0.TestForwardWithinChannel
[       OK ] LRNLayerTest/0.TestForwardWithinChannel (2 ms)
[----------] 5 tests from LRNLayerTest/0 (9 ms total)

[----------] 3 tests from ConvolutionLayerTest/0, where TypeParam = caffe::CPUDevice<float>
[ RUN      ] ConvolutionLayerTest/0.TestSetup
[       OK ] ConvolutionLayerTest/0.TestSetup (1 ms)
[ RUN      ] ConvolutionLayerTest/0.TestSimpleConvolution
[       OK ] ConvolutionLayerTest/0.TestSimpleConvolution (4 ms)
[ RUN      ] ConvolutionLayerTest/0.Test1x1Convolution
[       OK ] ConvolutionLayerTest/0.Test1x1Convolution (3 ms)
[----------] 3 tests from ConvolutionLayerTest/0 (8 ms total)

[----------] Global test environment tear-down
release testbed resource
[==========] 29 tests from 6 test cases ran. (1159 ms total)
[  PASSED  ] 28 tests.
[  FAILED  ] 1 test, listed below:
[  FAILED  ] PoolingLayerTest/0.TestForwardMaxTopMask, where TypeParam = caffe::CPUDevice<float>

 1 FAILED TEST

My system configuration

Operating system: Xubuntu 16.04 (aarch64)
Hardware: Firefly-RK3399
CUDA version (if applicable): none
cuDNN version (if applicable): none
BLAS: OpenBLAS (built with OpenMP), recompiled

@Hurmean commented Apr 2, 2018

I am hitting the same problem! Did you solve it?

@joey2014 commented Apr 10, 2018

The reason is that ACL does not support the pooling top mask, so we have to fall back to the Caffe implementation for this test:

diff --git a/src/caffe/layers/acl_pooling_layer.cpp b/src/caffe/layers/acl_pooling_layer.cpp
index f62fb5d..c7951d7 100644
--- a/src/caffe/layers/acl_pooling_layer.cpp
+++ b/src/caffe/layers/acl_pooling_layer.cpp
@@ -43,6 +43,7 @@ template <typename Dtype>
 bool ACLPoolingLayer<Dtype>::Bypass_acl(const vector<Blob<Dtype>*>& bottom,
     const vector<Blob<Dtype>*>& top){
   bool bypass_acl=false;
+  bool use_top_mask = top.size() > 1;
   if (this->force_bypass_acl_path_|| this->layer_param_.pooling_param().global_pooling()) {
     bypass_acl=true;
   }
@@ -56,6 +57,9 @@ bool ACLPoolingLayer<Dtype>::Bypass_acl(const vector<Blob<Dtype>*>& bottom,
   if (this->kernel_h_!=2 && this->kernel_h_!=3) {
     bypass_acl=true;
   }
+  if (use_top_mask){
+    bypass_acl=true;
+  }
   return bypass_acl;
 }
