This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

Build error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1 #25

Closed
gf19880710 opened this issue Oct 26, 2018 · 44 comments

Comments

@gf19880710

❓ Questions and Help

Hello great guys,
When I tried to install maskrcnn-benchmark on my PC, I got the following compilation error in the last step. Can anyone have a look? Thanks

(maskrcnn_benchmark) gengfeng@ai-work-4:~/github/maskrcnn-benchmark$ python setup.py build develop
running build
running build_ext
building 'maskrcnn_benchmark._C' extension
gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/gengfeng/anaconda3/envs/maskrcnn_benchmark/include/python3.6m -c /home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/vision.cpp -o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/vision.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/gengfeng/anaconda3/envs/maskrcnn_benchmark/include/python3.6m -c /home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cpu/nms_cpu.cpp -o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cpu/nms_cpu.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/gengfeng/anaconda3/envs/maskrcnn_benchmark/include/python3.6m -c /home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cpu/ROIAlign_cpu.cpp -o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cpu/ROIAlign_cpu.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/usr/local/cuda/bin/nvcc -DWITH_CUDA -I/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/gengfeng/anaconda3/envs/maskrcnn_benchmark/include/python3.6m -c /home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda/ROIAlign_cuda.cu -o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda/ROIAlign_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/core/TensorMethods.h:453:36:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/core/TensorMethods.h:453:36:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/core/TensorMethods.h:453:36:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/core/TensorMethods.h:453:36:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:1960:69:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:1960:69:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>&; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor&, at::Tensor&, at::Tensor&}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:1960:69:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>&; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>&&; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor&, at::Tensor&, at::Tensor&}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:1960:69:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>&&; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (5ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (5ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3040:197:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (5ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (5ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3040:197:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3040:197:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (6, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3040:197:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (6, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3043:267:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3043:267:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’:
/usr/include/c++/6/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3043:267:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (5, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’:
/usr/include/c++/6/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3043:267:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (5, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3509:107:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3509:107:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3509:107:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (5, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3509:107:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (5, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1
(maskrcnn_benchmark) gengfeng@ai-work-4:~/github/maskrcnn-benchmark$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176
@fmassa
Contributor

fmassa commented Oct 26, 2018

Hi,

I'll look into this. Could you please post the PyTorch version that you used?

@gf19880710
Author

@fmassa
Thank you, please find the info below:

(maskrcnn_benchmark) gengfeng@ai-work-4:~/github/maskrcnn-benchmark$ conda list
# packages in environment at /home/gengfeng/anaconda3/envs/maskrcnn_benchmark:
#
blas                      1.0                         mkl    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
certifi                   2016.2.28                py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
cffi                      1.10.0                   py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
decorator                 4.1.2                    py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
intel-openmp              2019.0                      118    defaults
ipython                   6.1.0                    py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
ipython_genutils          0.2.0                    py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
jedi                      0.10.2                   py36_2    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
libffi                    3.2.1                         1    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
libgcc-ng                 8.2.0                hdf63c60_1    defaults
libgfortran-ng            7.3.0                hdf63c60_0    defaults
libstdcxx-ng              8.2.0                hdf63c60_1    defaults
mkl                       2019.0                      118    defaults
mkl_fft                   1.0.6            py36h7dd41cf_0    defaults
mkl_random                1.0.1            py36h4414c95_1    defaults
ninja                     1.7.2                         0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
numpy                     1.15.3           py36h1d66e8a_0    defaults
numpy-base                1.15.3           py36h81de0dd_0    defaults
openssl                   1.0.2l                        0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
path.py                   10.3.1                   py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
pexpect                   4.2.1                    py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
pickleshare               0.7.4                    py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
pip                       9.0.1                    py36_1    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
prompt_toolkit            1.0.15                   py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
ptyprocess                0.5.2                    py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
pycocotools               2.0                       <pip>
pycparser                 2.18                     py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
pygments                  2.2.0                    py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
python                    3.6.2                         0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
pytorch-nightly           1.0.0.dev20181025 py3.6_cuda9.0.176_cudnn7.1.2_0    pytorch
readline                  6.2                           2    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
setuptools                36.4.0                   py36_1    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
simplegeneric             0.8.1                    py36_1    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
six                       1.10.0                   py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
sqlite                    3.13.0                        0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tk                        8.5.18                        0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tqdm                      4.28.1                    <pip>
traitlets                 4.3.2                    py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
wcwidth                   0.1.7                    py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
wheel                     0.29.0                   py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
xz                        5.2.3                         0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
zlib                      1.2.11                        0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free

@fmassa
Contributor

fmassa commented Oct 26, 2018

This is very weird. I've pulled the same PyTorch build as you and I have the same nvcc version, but compilation went fine. I'm investigating a bit further, but I wonder if it might be related to facebookarchive/caffe2#1898

Can you try doing something like

CUDA_HOST_COMPILER=/usr/bin/gcc-5 python setup.py build develop

@fmassa
Contributor

fmassa commented Oct 26, 2018

Oh, wait, what's your gcc version?

It seems that CUDA 9.0 doesn't support gcc 6.4.0
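
For reference, the gcc versions a given CUDA toolkit accepts are enforced in its host_config.h header; a quick way to check both sides (a sketch, assuming a default CUDA 9.x layout under /usr/local/cuda, where the header sits in include/crt/):

gcc --version
grep -n "__GNUC__" /usr/local/cuda/include/crt/host_config.h   # shows the version guard nvcc applies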

@gf19880710
Author

Hi, my gcc version is 6.4.0!

(maskrcnn_benchmark) gengfeng@ai-work-4:~/github/maskrcnn-benchmark$ gcc --version
gcc (Ubuntu 6.4.0-17ubuntu1) 6.4.0 20180424
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

so, which gcc version I should use?

@fmassa
Contributor

fmassa commented Oct 26, 2018

I'm using gcc 5.4.0 and it works fine here.

@lucasjinreal

@fmassa I think CUDA 9.0 supports gcc 6 but not gcc 7, which is the default in Ubuntu 18.04. I got this error too but have no clue why.

@lucasjinreal

Currently my gcc version is 6.4

@fmassa
Contributor

fmassa commented Oct 26, 2018

@jinfagang according to the list I posted, CUDA 9.0 doesn't support gcc 6.4 either.
We use FindCUDA internally, so specifying CUDA_HOST_COMPILER to point to a lower gcc version should work.
I'd start by checking whether passing it as an environment variable works.

@gf19880710
Author

@fmassa ,
@fmassa ,
Let me try 5.4.0 and give you feedback later.
@jinfagang I also use Ubuntu 18.04, which has gcc 6 and gcc 7 (the default).

@lucasjinreal

@fmassa How do I switch to gcc 5?

@fmassa
Contributor

fmassa commented Oct 26, 2018

@gf19880710 if the environment flag doesn't work, you can add a line in setup.py with the CUDA_HOST_COMPILER flag that I mentioned earlier.
Something like

'-DCUDA_HOST_COMPILER=/usr/bin/gcc5'

or whatever the path to your gcc 5 is

@fmassa fmassa mentioned this issue Oct 26, 2018
@gf19880710
Author

@fmassa OK, I'll get back to you with feedback later.

@lucasjinreal

@fmassa I added that line to setup.py, but still got this:

maskrcnn-benchmark/env/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3517:107:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (5, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1

if torch.cuda.is_available() and CUDA_HOME is not None:
        extension = CUDAExtension
        sources += source_cuda
        define_macros += [("WITH_CUDA", None)]
        extra_compile_args["nvcc"] = [
            "-DCUDA_HAS_FP16=1",
            "-D__CUDA_NO_HALF_OPERATORS__",
            "-DCUDA_HOST_COMPILER=/usr/bin/gcc-5",
            "-D__CUDA_NO_HALF_CONVERSIONS__",
            "-D__CUDA_NO_HALF2_OPERATORS__",
        ]

@fmassa
Contributor

fmassa commented Oct 26, 2018

Just to double check, could you rm -rd build/ and try again?

@lucasjinreal

lucasjinreal commented Oct 26, 2018

@fmassa removing the build folder doesn't help, I tried just now. Could it be some other issue? I can see it still includes headers from /usr/include/c++/6 ...

@fmassa
Contributor

fmassa commented Oct 26, 2018

@jinfagang could you try rm -rf build/, and try again with the following command:

CUDAHOSTCXX=/usr/bin/gcc-5 python setup.py build develop

?
I just need to find the right way of doing it so that it gets picked up by CMake

@gf19880710
Author

@fmassa
Hi, sorry for the delay. When I tried to install gcc 5.4.0 by compiling the gcc-5.4.0.tar.gz sources, I kept hitting issues, so I decided to install gcc-5 another way:
sudo apt-get install gcc-5
That finally gave me gcc 5.5.0.
Then I created a symlink for gcc 5.5.0:
sudo rm /usr/bin/gcc
sudo ln -s /usr/bin/gcc-5 /usr/bin/gcc

(maskrcnn_benchmark) gengfeng@ai-work-4:~/github/maskrcnn-benchmark$ gcc --version
gcc (Ubuntu 5.5.0-12ubuntu1) 5.5.0 20171010
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Then I followed your advice:
add -DCUDA_HOST_COMPILER=/usr/bin/gcc-5 to the setup.py file:

    if torch.cuda.is_available() and CUDA_HOME is not None:
        extension = CUDAExtension
        sources += source_cuda
        define_macros += [("WITH_CUDA", None)]
        extra_compile_args["nvcc"] = [
            "-DCUDA_HAS_FP16=1",
            "-D__CUDA_NO_HALF_OPERATORS__",
            "-D__CUDA_NO_HALF_CONVERSIONS__",
            "-D__CUDA_NO_HALF2_OPERATORS__",
            "-DCUDA_HOST_COMPILER=/usr/bin/gcc-5"
        ]

rm -rd build/

(maskrcnn_benchmark) gengfeng@ai-work-4:~/github/maskrcnn-benchmark$ gcc --version
gcc (Ubuntu 5.5.0-12ubuntu1) 5.5.0 20171010
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

(maskrcnn_benchmark) gengfeng@ai-work-4:~/github/maskrcnn-benchmark$ rm -rf build/
(maskrcnn_benchmark) gengfeng@ai-work-4:~/github/maskrcnn-benchmark$ ls
ABSTRACTIONS.md  CODE_OF_CONDUCT.md  configs  CONTRIBUTING.md  demo  INSTALL.md  LICENSE  maskrcnn_benchmark  maskrcnn_benchmark.egg-info  MODEL_ZOO.md  README.md  setup.py  tests  tools
(maskrcnn_benchmark) gengfeng@ai-work-4:~/github/maskrcnn-benchmark$ CUDAHOSTCXX=/usr/bin/gcc-5 python setup.py build develop
running build
running build_ext
building 'maskrcnn_benchmark._C' extension
creating build
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/home
creating build/temp.linux-x86_64-3.6/home/gengfeng
creating build/temp.linux-x86_64-3.6/home/gengfeng/github
creating build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark
creating build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark
creating build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc
creating build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cpu
creating build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda
gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/gengfeng/anaconda3/envs/maskrcnn_benchmark/include/python3.6m -c /home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/vision.cpp -o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/vision.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
gcc: error trying to exec 'cc1plus': execvp: no such file or directory
error: command 'gcc' failed with exit status 1

It still failed, but this time it was gcc that failed.

@fmassa
Contributor

fmassa commented Oct 26, 2018

@gf19880710 from looking around on the internet, it seems that your gcc 5 installation is broken: https://stackoverflow.com/questions/41554900/gcc-and-g-error-error-trying-to-exec-cc1plus-execvp-no-such-file-or-direc

What's your OS? There might be some OS-specific information online that we could use to make this work.
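
For what it's worth, the cc1plus error usually means the C++ front end for that gcc version is missing; a minimal check on Ubuntu (a sketch, assuming the stock packages, where cc1plus is shipped by g++-5):

dpkg -l gcc-5 g++-5            # both packages should be installed
sudo apt-get install g++-5     # or --reinstall g++-5 if it is already present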

@gf19880710
Author

gf19880710 commented Oct 26, 2018

@fmassa
Sorry again great man, that's my mistake: I forgot to create the symlink for g++! My OS is Ubuntu 18.04.

gengfeng@ai-work-4:/usr/local/cuda-9.0/bin$ sudo ln -s /usr/bin/g++-5 g++
gengfeng@ai-work-4:/usr/local/cuda-9.0/bin$ g++ --version
g++ (Ubuntu 5.5.0-12ubuntu1) 5.5.0 20171010
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

After this, the build succeeded:

(maskrcnn_benchmark) gengfeng@ai-work-4:~/github/maskrcnn-benchmark$ g++ --version
g++ (Ubuntu 5.5.0-12ubuntu1) 5.5.0 20171010
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

(maskrcnn_benchmark) gengfeng@ai-work-4:~/github/maskrcnn-benchmark$ rm -rf build/
(maskrcnn_benchmark) gengfeng@ai-work-4:~/github/maskrcnn-benchmark$ CUDAHOSTCXX=/usr/bin/gcc-5 python setup.py build develop
running build
running build_ext
building 'maskrcnn_benchmark._C' extension
creating build
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/home
creating build/temp.linux-x86_64-3.6/home/gengfeng
creating build/temp.linux-x86_64-3.6/home/gengfeng/github
creating build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark
creating build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark
creating build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc
creating build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cpu
creating build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda
gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/gengfeng/anaconda3/envs/maskrcnn_benchmark/include/python3.6m -c /home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/vision.cpp -o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/vision.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/gengfeng/anaconda3/envs/maskrcnn_benchmark/include/python3.6m -c /home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cpu/nms_cpu.cpp -o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cpu/nms_cpu.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/gengfeng/anaconda3/envs/maskrcnn_benchmark/include/python3.6m -c /home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cpu/ROIAlign_cpu.cpp -o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cpu/ROIAlign_cpu.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/usr/local/cuda/bin/nvcc -DWITH_CUDA -I/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/gengfeng/anaconda3/envs/maskrcnn_benchmark/include/python3.6m -c /home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda/ROIAlign_cuda.cu -o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda/ROIAlign_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DCUDA_HOST_COMPILER=/usr/bin/gcc-5 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
/usr/local/cuda/bin/nvcc -DWITH_CUDA -I/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/gengfeng/anaconda3/envs/maskrcnn_benchmark/include/python3.6m -c /home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda/nms.cu -o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda/nms.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DCUDA_HOST_COMPILER=/usr/bin/gcc-5 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
/usr/local/cuda/bin/nvcc -DWITH_CUDA -I/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/gengfeng/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/gengfeng/anaconda3/envs/maskrcnn_benchmark/include/python3.6m -c /home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda/ROIPool_cuda.cu -o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda/ROIPool_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DCUDA_HOST_COMPILER=/usr/bin/gcc-5 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/maskrcnn_benchmark
g++ -pthread -shared -L/home/gengfeng/anaconda3/envs/maskrcnn_benchmark/lib -Wl,-rpath=/home/gengfeng/anaconda3/envs/maskrcnn_benchmark/lib,--no-as-needed build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/vision.o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cpu/nms_cpu.o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cpu/ROIAlign_cpu.o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda/ROIAlign_cuda.o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda/nms.o build/temp.linux-x86_64-3.6/home/gengfeng/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda/ROIPool_cuda.o -L/usr/local/cuda/lib64 -L/home/gengfeng/anaconda3/envs/maskrcnn_benchmark/lib -lcudart -lpython3.6m -o build/lib.linux-x86_64-3.6/maskrcnn_benchmark/_C.cpython-36m-x86_64-linux-gnu.so
running develop
running egg_info
writing maskrcnn_benchmark.egg-info/PKG-INFO
writing dependency_links to maskrcnn_benchmark.egg-info/dependency_links.txt
writing top-level names to maskrcnn_benchmark.egg-info/top_level.txt
reading manifest file 'maskrcnn_benchmark.egg-info/SOURCES.txt'
writing manifest file 'maskrcnn_benchmark.egg-info/SOURCES.txt'
running build_ext
copying build/lib.linux-x86_64-3.6/maskrcnn_benchmark/_C.cpython-36m-x86_64-linux-gnu.so -> maskrcnn_benchmark
Creating /home/gengfeng/anaconda3/envs/maskrcnn_benchmark/lib/python3.6/site-packages/maskrcnn-benchmark.egg-link (link to .)
Adding maskrcnn-benchmark 0.1 to easy-install.pth file

Installed /home/gengfeng/github/maskrcnn-benchmark
Processing dependencies for maskrcnn-benchmark==0.1
Finished processing dependencies for maskrcnn-benchmark==0.1
(maskrcnn_benchmark) gengfeng@ai-work-4:~/github/maskrcnn-benchmark/demo$ python
Python 3.6.2 |Continuum Analytics, Inc.| (default, Jul 20 2017, 13:51:32) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import maskrcnn_benchmark
>>>

Thank you great man!

@fmassa
Contributor

fmassa commented Oct 26, 2018

Awesome!

@gf19880710 could you summarize the steps you took to get it working correctly?
I'll update the README with that information.

@gf19880710
Author

@fmassa
OK, it's my pleasure. Here are the steps that worked for this issue on Ubuntu 18.04.

  • Following INSTALL.md, I hit this error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1.

  • @fmassa suggested checking the gcc version on my PC. Mine was gcc 6.4.0, which can cause this issue, so I decided to change my gcc version.

  • Compiling gcc-5.4.0.tar.gz from source is complex and failed for me again, so I used sudo apt-get install gcc-5 to install gcc-5.

  • After installing gcc-5, run gcc --version to check whether the reported version is still 6.4.0 (it usually is, so we need to create symlinks for both gcc and g++).

  • Create the symlinks in both the /usr/bin and /usr/local/cuda-9.0/bin directories:

cd /usr/bin
sudo rm gcc
sudo rm g++
sudo ln -s /usr/bin/gcc-5 gcc
sudo ln -s /usr/bin/g++-5 g++

cd /usr/local/cuda-9.0/bin
sudo rm gcc
sudo rm g++
sudo ln -s /usr/bin/gcc-5 gcc
sudo ln -s /usr/bin/g++-5 g++
  • Check that the gcc and g++ versions have changed to 5.5.0 accordingly.

  • Add -DCUDA_HOST_COMPILER=/usr/bin/gcc-5 to the setup.py file:

    if torch.cuda.is_available() and CUDA_HOME is not None:
        extension = CUDAExtension
        sources += source_cuda
        define_macros += [("WITH_CUDA", None)]
        extra_compile_args["nvcc"] = [
            "-DCUDA_HAS_FP16=1",
            "-D__CUDA_NO_HALF_OPERATORS__",
            "-D__CUDA_NO_HALF_CONVERSIONS__",
            "-D__CUDA_NO_HALF2_OPERATORS__",
            "-DCUDA_HOST_COMPILER=/usr/bin/gcc-5"
        ]
  • Do rm -rd build/

  • Compile again with:

CUDAHOSTCXX=/usr/bin/gcc-5 python setup.py build develop
  • Open an ipython or python session to check that the module imports successfully:
(maskrcnn_benchmark) gengfeng@ai-work-4:~/github/maskrcnn-benchmark/demo$ python
Python 3.6.2 |Continuum Analytics, Inc.| (default, Jul 20 2017, 13:51:32) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import maskrcnn_benchmark
>>>
  • Finished.

I hope these steps are useful!

@steve-goley

I ran into the same issue. I switched from CUDA 9.0 to CUDA 9.2 and then it built fine. I tried this on Debian testing and Ubuntu 18.04.

@gd2016229035

I am using CUDA 8.0 and Ubuntu 14.04... I have tried changing the gcc version from 4.8 to 5.4, but CUDA 8 does not support that gcc version... Maybe it can hardly be compiled in this environment? T T

@fmassa
Contributor

fmassa commented Oct 29, 2018

@gd2016229035 the solution in your case is to compile PyTorch from source.

In this case, you can use CUDA 8 with gcc 4.8, and you can compile the maskrcnn-benchmark library using gcc 4.8 with CUDA 8.0.

Let me know if you have more questions. Instructions on how to compile PyTorch from source can be found at https://github.com/pytorch/pytorch. Don't forget to uninstall the existing PyTorch before compiling it.
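
A rough sketch of that route, for orientation only (it assumes gcc-4.8/g++-4.8 are installed and that the prebuilt PyTorch is removed first; the authoritative, up-to-date steps are in the PyTorch README):

pip uninstall torch                      # or conda remove, depending on how PyTorch was installed
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
CC=gcc-4.8 CXX=g++-4.8 python setup.py install
cd ../maskrcnn-benchmark
rm -rf build/
CC=gcc-4.8 CXX=g++-4.8 python setup.py build develop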

@hadim
Contributor

hadim commented Oct 29, 2018

Compilation works fine by symlinking gcc and g++ to version 5 (on Ubuntu 18.10).

Would you be willing to provide a conda-forge package for maskrcnn-benchmark? That would make Python environment installation easier, lower the entry barrier for potential users, and make workflows easier and more reproducible.

I can help if needed.

@Curry1201

Hi @fmassa, I have encountered a problem similar to this one; it can be seen here: open-mmlab/mmdetection#125
I followed the steps of the solution, but at the step "Add -DCUDA_HOST_COMPILER=/usr/bin/gcc-5 to the setup.py file" I don't know how to add the flag to my setup.py file.
Here is my setup.py file.

import os
import subprocess
import time
from setuptools import find_packages, setup


def readme():
    with open('README.md') as f:
        content = f.read()
    return content


MAJOR = 0
MINOR = 5
PATCH = 2
SUFFIX = ''
SHORT_VERSION = '{}.{}.{}{}'.format(MAJOR, MINOR, PATCH, SUFFIX)

version_file = 'mmdet/version.py'


def get_git_hash():

    def _minimal_ext_cmd(cmd):
        # construct minimal environment
        env = {}
        for k in ['SYSTEMROOT', 'PATH', 'HOME']:
            v = os.environ.get(k)
            if v is not None:
                env[k] = v
        # LANGUAGE is used on win32
        env['LANGUAGE'] = 'C'
        env['LANG'] = 'C'
        env['LC_ALL'] = 'C'
        out = subprocess.Popen(
            cmd, stdout=subprocess.PIPE, env=env).communicate()[0]
        return out

    try:
        out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])
        sha = out.strip().decode('ascii')
    except OSError:
        sha = 'unknown'

    return sha


def get_hash():
    if os.path.exists('.git'):
        sha = get_git_hash()[:7]
    elif os.path.exists(version_file):
        try:
            from mmdet.version import version
            sha = version.split('+')[-1]
        except ImportError:
            raise ImportError('Unable to get git version')
    else:
        sha = 'unknown'

    return sha


def write_version_py():
    content = """# GENERATED VERSION FILE
# TIME: {}

version = '{}'
short_version = '{}'
"""
    sha = get_hash()
    VERSION = SHORT_VERSION + '+' + sha

    with open(version_file, 'w') as f:
        f.write(content.format(time.asctime(), VERSION, SHORT_VERSION))


def get_version():
    with open(version_file, 'r') as f:
        exec(compile(f.read(), version_file, 'exec'))
    return locals()['version']


if __name__ == '__main__':
    write_version_py()
    setup(
        name='mmdet',
        version=get_version(),
        description='Open MMLab Detection Toolbox',
        long_description=readme(),
        keywords='computer vision, object detection',
        url='https://github.com/open-mmlab/mmdetection',
        packages=find_packages(),
        package_data={'mmdet.ops': ['*/*.so']},
        classifiers=[
            'Development Status :: 4 - Beta',
            'License :: OSI Approved :: GNU General Public License v3 (GPLv3)',
            'Operating System :: OS Independent',
            'Programming Language :: Python :: 2',
            'Programming Language :: Python :: 2.7',
            'Programming Language :: Python :: 3',
            'Programming Language :: Python :: 3.4',
            'Programming Language :: Python :: 3.5',
            'Programming Language :: Python :: 3.6',
        ],
        license='GPLv3',
        setup_requires=['pytest-runner'],
        tests_require=['pytest'],
        install_requires=[
            'mmcv', 'numpy', 'matplotlib', 'six', 'terminaltables',
            'pycocotools'
        ],
        zip_safe=False)

@fmassa
Contributor

fmassa commented Nov 27, 2018

@Curry1201 are you having problems compiling maskrcnn-benchmark or mmdetection?
If it's mmdetection, it's better to wait until the mmdetection team replies, as I don't have any experience with their codebase.

@Curry1201

Hi @gf19880710 , I have encountered a problem similar to yours. The main error is: unable to execute ':/usr/local/cuda-9.0:/usr/local/cuda-9.0/bin/nvcc': No such file or directory. error: command ':/usr/local/cuda-9.0:/usr/local/cuda-9.0/bin/nvcc' failed with exit status 1. The specific compilation process is as follows; can you give me some advice?

ubuntu server 18.04
cuda 9.0.176
cudnn 7.0.5
pytorch 0.4.1
gcc 5.5.0 g++ 5.5.0
python 3.6

power@ubuntu:~/mmdetection$ PYTHON=python3 ./compile.sh
Building roi align op...
running build_ext
building 'roi_align_cuda' extension
creating build
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/src
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/power/.local/lib/python3.6/site-packages/torch/lib/include -I/home/power/.local/lib/python3.6/site-packages/torch/lib/include/TH -I/home/power/.local/lib/python3.6/site-packages/torch/lib/include/THC -I:/usr/local/cuda-9.0:/usr/local/cuda-9.0/include -I/usr/include/python3.6m -c src/roi_align_cuda.cpp -o build/temp.linux-x86_64-3.6/src/roi_align_cuda.o -DTORCH_EXTENSION_NAME=roi_align_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
:/usr/local/cuda-9.0:/usr/local/cuda-9.0/bin/nvcc -I/home/power/.local/lib/python3.6/site-packages/torch/lib/include -I/home/power/.local/lib/python3.6/site-packages/torch/lib/include/TH -I/home/power/.local/lib/python3.6/site-packages/torch/lib/include/THC -I:/usr/local/cuda-9.0:/usr/local/cuda-9.0/include -I/usr/include/python3.6m -c src/roi_align_kernel.cu -o build/temp.linux-x86_64-3.6/src/roi_align_kernel.o -DTORCH_EXTENSION_NAME=roi_align_cuda -D_GLIBCXX_USE_CXX11_ABI=0 --compiler-options '-fPIC' -std=c++11
unable to execute ':/usr/local/cuda-9.0:/usr/local/cuda-9.0/bin/nvcc': No such file or directory
error: command ':/usr/local/cuda-9.0:/usr/local/cuda-9.0/bin/nvcc' failed with exit status 1
Building roi pool op...
running build_ext
building 'roi_pool_cuda' extension
creating build
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/src
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/power/.local/lib/python3.6/site-packages/torch/lib/include -I/home/power/.local/lib/python3.6/site-packages/torch/lib/include/TH -I/home/power/.local/lib/python3.6/site-packages/torch/lib/include/THC -I:/usr/local/cuda-9.0:/usr/local/cuda-9.0/include -I/usr/include/python3.6m -c src/roi_pool_cuda.cpp -o build/temp.linux-x86_64-3.6/src/roi_pool_cuda.o -DTORCH_EXTENSION_NAME=roi_pool_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
:/usr/local/cuda-9.0:/usr/local/cuda-9.0/bin/nvcc -I/home/power/.local/lib/python3.6/site-packages/torch/lib/include -I/home/power/.local/lib/python3.6/site-packages/torch/lib/include/TH -I/home/power/.local/lib/python3.6/site-packages/torch/lib/include/THC -I:/usr/local/cuda-9.0:/usr/local/cuda-9.0/include -I/usr/include/python3.6m -c src/roi_pool_kernel.cu -o build/temp.linux-x86_64-3.6/src/roi_pool_kernel.o -DTORCH_EXTENSION_NAME=roi_pool_cuda -D_GLIBCXX_USE_CXX11_ABI=0 --compiler-options '-fPIC' -std=c++11
unable to execute ':/usr/local/cuda-9.0:/usr/local/cuda-9.0/bin/nvcc': No such file or directory
error: command ':/usr/local/cuda-9.0:/usr/local/cuda-9.0/bin/nvcc' failed with exit status 1
Building nms op...
rm -f .so
echo "Compiling nms kernels..."
Compiling nms kernels...
python3 setup.py build_ext --inplace
running build_ext
building 'cpu_nms' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/power/.local/lib/python3.6/site-packages/numpy/core/include -I/usr/include/python3.6m -c cpu_nms.cpp -o build/temp.linux-x86_64-3.6/cpu_nms.o -Wno-unused-function -Wno-write-strings
In file included from /home/power/.local/lib/python3.6/site-packages/numpy/core/include/numpy/ndarraytypes.h:1821:0,
from /home/power/.local/lib/python3.6/site-packages/numpy/core/include/numpy/ndarrayobject.h:18,
from /home/power/.local/lib/python3.6/site-packages/numpy/core/include/numpy/arrayobject.h:4,
from cpu_nms.cpp:658:
/home/power/.local/lib/python3.6/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
#warning "Using deprecated NumPy API, disable it by "
^~~~~~~
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/cpu_nms.o -o /home/power/mmdetection/mmdet/ops/nms/cpu_nms.cpython-36m-x86_64-linux-gnu.so
building 'cpu_soft_nms' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/power/.local/lib/python3.6/site-packages/numpy/core/include -I/usr/include/python3.6m -c cpu_soft_nms.cpp -o build/temp.linux-x86_64-3.6/cpu_soft_nms.o -Wno-unused-function -Wno-write-strings
In file included from /home/power/.local/lib/python3.6/site-packages/numpy/core/include/numpy/ndarraytypes.h:1821:0,
from /home/power/.local/lib/python3.6/site-packages/numpy/core/include/numpy/ndarrayobject.h:18,
from /home/power/.local/lib/python3.6/site-packages/numpy/core/include/numpy/arrayobject.h:4,
from cpu_soft_nms.cpp:658:
/home/power/.local/lib/python3.6/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
#warning "Using deprecated NumPy API, disable it by "
^~~~~~~
cpu_soft_nms.cpp: In function ‘PyObject
__pyx_pf_12cpu_soft_nms_cpu_soft_nms(PyObject*, PyArrayObject*, float, float, float, unsigned int)’:
cpu_soft_nms.cpp:2491:34: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
__pyx_t_10 = ((__pyx_v_pos < __pyx_v_N) != 0);
~~~~~~~~~~~~^~~~~~~~~~~
cpu_soft_nms.cpp:3002:34: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
__pyx_t_10 = ((__pyx_v_pos < __pyx_v_N) != 0);
~~~~~~~~~~~~^~~~~~~~~~~
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/cpu_soft_nms.o -o /home/power/mmdetection/mmdet/ops/nms/cpu_soft_nms.cpython-36m-x86_64-linux-gnu.so
building 'gpu_nms' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/power/.local/lib/python3.6/site-packages/numpy/core/include -I/usr/include/python3.6m -c gpu_nms.cpp -o build/temp.linux-x86_64-3.6/gpu_nms.o -Wno-unused-function -Wno-write-strings
In file included from /home/power/.local/lib/python3.6/site-packages/numpy/core/include/numpy/ndarraytypes.h:1821:0,
from /home/power/.local/lib/python3.6/site-packages/numpy/core/include/numpy/ndarrayobject.h:18,
from /home/power/.local/lib/python3.6/site-packages/numpy/core/include/numpy/arrayobject.h:4,
from gpu_nms.cpp:660:
/home/power/.local/lib/python3.6/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
#warning "Using deprecated NumPy API, disable it by "
^~~~~~~
nvcc -I/home/power/.local/lib/python3.6/site-packages/numpy/core/include -I/usr/include/python3.6m -c nms_kernel.cu -o build/temp.linux-x86_64-3.6/nms_kernel.o -c --compiler-options -fPIC
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/gpu_nms.o build/temp.linux-x86_64-3.6/nms_kernel.o -o /home/power/mmdetection/mmdet/ops/nms/gpu_nms.cpython-36m-x86_64-linux-gnu.so
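
The colon-prefixed, doubled path in that error (':/usr/local/cuda-9.0:/usr/local/cuda-9.0/bin/nvcc') suggests CUDA_HOME was exported like a PATH rather than as a single directory; a minimal sketch of the usual fix, assuming CUDA 9.0 lives under /usr/local/cuda-9.0:

# in ~/.bashrc: CUDA_HOME must be one directory, not a colon-separated list
export CUDA_HOME=/usr/local/cuda-9.0
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH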

@fmassa
Contributor

fmassa commented Nov 28, 2018

@Curry1201 your question concerns mmdetection. Please redirect such questions to the mmdetection repo, as I believe that is the best way of getting help for mmdetection.

@rsbowen

rsbowen commented Jan 21, 2019

I had this issue; instead of manually changing the /usr/bin symlinks, I got it to compile (after installing gcc-5) using update-alternatives:

sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 10
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-5 10
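
As a follow-up, once alternatives are registered you can switch between compiler versions interactively and confirm which one is active (a usage sketch; it assumes the newer gcc/g++ were registered as alternatives too):

sudo update-alternatives --config gcc
sudo update-alternatives --config g++
gcc --version    # confirm the selected version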

@AllenAnthony

I installed maskrcnn-benchmark 0.1 successfully
(maskrcnn-benchmark 0.1 is already the active version in easy-install.pth)
and I can
import maskrcnn_benchmark
but when I run
from maskrcnn_benchmark import _C
I am told
maskrcnn/maskrcnn_benchmark/_C.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE
How can I fix it?

@fmassa
Contributor

fmassa commented Feb 6, 2019

@AllenAnthony you probably compiled maskrcnn-benchmark with a different version of PyTorch than the one you are using to run it. Note that maskrcnn-benchmark requires PyTorch 1.0 or later.
Your problem is similar to #149
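
For reference, a minimal sketch of the usual remedy in that situation, assuming the extension is simply stale and the PyTorch in the active environment is 1.0 or later:

python -c "import torch; print(torch.__version__)"    # confirm which PyTorch is active
rm -rf build/ maskrcnn_benchmark/_C.cpython-36m-x86_64-linux-gnu.so
python setup.py build develop                          # rebuild the extension against that PyTorch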

@mathpluscode

@gf19880710 Thanks so much for your summary.

I had the same error when executing pip install --upgrade torch-scatter on Ubuntu 18.04 with CUDA 9.0 and gcc 6. Some people suggest using CUDA 9.2 to solve the problem.

I finally solved the problem by switching to gcc/g++ 5.

@james77777778

Somehow I encountered a similar problem:
error: command '/usr/bin/nvcc' failed with exit status 1
It was not quite the same as error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1, since my failing path lacked local/cuda.
I finally worked through it by adding a line to my .bashrc:
export CUDA_HOME=/usr/local/cuda

Probably I missed something when I installed CUDA...

@ChristofHenkel
Copy link

I ran into the same issue. I switched from CUDA 9.0 to CUDA 9.2 and then it built fine. I tried this on Debian testing and Ubuntu 18.04.

that sounds easier than it is

@tsvetiko

tsvetiko commented Dec 30, 2019

Even after following the steps in the most liked comment, I got the gcc: error trying to exec 'cc1plus': execvp: no such file or directory error again. The fix was to do the following:

sudo apt-get install --reinstall g++-5

Check this answer on StackOverflow for more details.

@XYudong

XYudong commented Mar 2, 2020

Just in setup.py, add the line "-ccbin=/usr/local/bin/gcc" to extra_compile_args["nvcc"] = [], using your actual path to gcc 5.4.
Then run python setup.py build develop.
At least, this works for me.
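
In context, that would look something like the snippet below (a sketch based on the extra_compile_args block quoted earlier in this thread; -ccbin is nvcc's option for selecting the host compiler, and the path should be wherever your gcc 5.x actually lives):

    extra_compile_args["nvcc"] = [
        "-DCUDA_HAS_FP16=1",
        "-D__CUDA_NO_HALF_OPERATORS__",
        "-D__CUDA_NO_HALF_CONVERSIONS__",
        "-D__CUDA_NO_HALF2_OPERATORS__",
        "-ccbin=/usr/local/bin/gcc",  # path to the gcc 5.x host compiler
    ]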

@Kins1ley

@AllenAnthony you probably compiled maskrcnn-benchmark with a different version of PyTorch than the one you are using to run it. Note that maskrcnn-benchmark requires PyTorch 1.0 or later.
Your problem is similar to #149

Hi, my CUDA is 10.0.130 and my gcc version is 6.5.0. I think that combination should be fine, but I met the same problem...

@vivva

vivva commented Dec 14, 2020

I also encountered this when installing AlphaPose. Can you help me? Ubuntu 20.04, CUDA 10.1, gcc/g++ 7.5.0
(alphapose) win@win-Blade-15-Base-Model-Early-2020-RZ09-0328:~/PycharmProjects/AlphaPose$ CUDAHOSTCXX=/usr/bin/gcc-7 python setup.py build develop running build running build_py creating build creating build/lib.linux-x86_64-3.6 creating build/lib.linux-x86_64-3.6/trackers copying trackers/__init__.py -> build/lib.linux-x86_64-3.6/trackers copying trackers/tracker_cfg.py -> build/lib.linux-x86_64-3.6/trackers copying trackers/tracker_api.py -> build/lib.linux-x86_64-3.6/trackers creating build/lib.linux-x86_64-3.6/alphapose copying alphapose/opt.py -> build/lib.linux-x86_64-3.6/alphapose copying alphapose/__init__.py -> build/lib.linux-x86_64-3.6/alphapose copying alphapose/version.py -> build/lib.linux-x86_64-3.6/alphapose creating build/lib.linux-x86_64-3.6/trackers/ReidModels copying trackers/ReidModels/bn_linear.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels copying trackers/ReidModels/ResBnLin.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels copying trackers/ReidModels/osnet_ain.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels copying trackers/ReidModels/__init__.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels copying trackers/ReidModels/ResNet.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels copying trackers/ReidModels/net_utils.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels copying trackers/ReidModels/osnet.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels copying trackers/ReidModels/resnet_fc.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels creating build/lib.linux-x86_64-3.6/trackers/tracking copying trackers/tracking/__init__.py -> build/lib.linux-x86_64-3.6/trackers/tracking copying trackers/tracking/matching.py -> build/lib.linux-x86_64-3.6/trackers/tracking copying trackers/tracking/basetrack.py -> build/lib.linux-x86_64-3.6/trackers/tracking creating build/lib.linux-x86_64-3.6/trackers/ReidModels/classification copying trackers/ReidModels/classification/rfcn_cls.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/classification copying trackers/ReidModels/classification/__init__.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/classification copying trackers/ReidModels/classification/classifier.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/classification creating build/lib.linux-x86_64-3.6/trackers/ReidModels/backbone copying trackers/ReidModels/backbone/sqeezenet.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/backbone copying trackers/ReidModels/backbone/lrn.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/backbone copying trackers/ReidModels/backbone/googlenet.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/backbone copying trackers/ReidModels/backbone/__init__.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/backbone creating build/lib.linux-x86_64-3.6/trackers/ReidModels/reid copying trackers/ReidModels/reid/__init__.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/reid copying trackers/ReidModels/reid/image_part_aligned.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/reid creating build/lib.linux-x86_64-3.6/trackers/ReidModels/psroi_pooling copying trackers/ReidModels/psroi_pooling/build.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/psroi_pooling copying trackers/ReidModels/psroi_pooling/__init__.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/psroi_pooling creating build/lib.linux-x86_64-3.6/trackers/ReidModels/psroi_pooling/modules copying trackers/ReidModels/psroi_pooling/modules/__init__.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/psroi_pooling/modules copying 
trackers/ReidModels/psroi_pooling/modules/psroi_pool.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/psroi_pooling/modules creating build/lib.linux-x86_64-3.6/trackers/ReidModels/psroi_pooling/_ext copying trackers/ReidModels/psroi_pooling/_ext/__init__.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/psroi_pooling/_ext creating build/lib.linux-x86_64-3.6/trackers/ReidModels/psroi_pooling/functions copying trackers/ReidModels/psroi_pooling/functions/__init__.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/psroi_pooling/functions copying trackers/ReidModels/psroi_pooling/functions/psroi_pooling.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/psroi_pooling/functions creating build/lib.linux-x86_64-3.6/trackers/ReidModels/psroi_pooling/_ext/psroi_pooling copying trackers/ReidModels/psroi_pooling/_ext/psroi_pooling/__init__.py -> build/lib.linux-x86_64-3.6/trackers/ReidModels/psroi_pooling/_ext/psroi_pooling creating build/lib.linux-x86_64-3.6/trackers/tracking/utils copying trackers/tracking/utils/kalman_filter.py -> build/lib.linux-x86_64-3.6/trackers/tracking/utils copying trackers/tracking/utils/timer.py -> build/lib.linux-x86_64-3.6/trackers/tracking/utils copying trackers/tracking/utils/utils.py -> build/lib.linux-x86_64-3.6/trackers/tracking/utils copying trackers/tracking/utils/__init__.py -> build/lib.linux-x86_64-3.6/trackers/tracking/utils copying trackers/tracking/utils/parse_config.py -> build/lib.linux-x86_64-3.6/trackers/tracking/utils copying trackers/tracking/utils/io.py -> build/lib.linux-x86_64-3.6/trackers/tracking/utils copying trackers/tracking/utils/nms.py -> build/lib.linux-x86_64-3.6/trackers/tracking/utils creating build/lib.linux-x86_64-3.6/alphapose/utils copying alphapose/utils/bbox.py -> build/lib.linux-x86_64-3.6/alphapose/utils copying alphapose/utils/env.py -> build/lib.linux-x86_64-3.6/alphapose/utils copying alphapose/utils/webcam_detector.py -> build/lib.linux-x86_64-3.6/alphapose/utils copying alphapose/utils/metrics.py -> build/lib.linux-x86_64-3.6/alphapose/utils copying alphapose/utils/__init__.py -> build/lib.linux-x86_64-3.6/alphapose/utils copying alphapose/utils/config.py -> build/lib.linux-x86_64-3.6/alphapose/utils copying alphapose/utils/logger.py -> build/lib.linux-x86_64-3.6/alphapose/utils copying alphapose/utils/transforms.py -> build/lib.linux-x86_64-3.6/alphapose/utils copying alphapose/utils/file_detector.py -> build/lib.linux-x86_64-3.6/alphapose/utils copying alphapose/utils/writer.py -> build/lib.linux-x86_64-3.6/alphapose/utils copying alphapose/utils/vis.py -> build/lib.linux-x86_64-3.6/alphapose/utils copying alphapose/utils/registry.py -> build/lib.linux-x86_64-3.6/alphapose/utils copying alphapose/utils/detector.py -> build/lib.linux-x86_64-3.6/alphapose/utils copying alphapose/utils/pPose_nms.py -> build/lib.linux-x86_64-3.6/alphapose/utils creating build/lib.linux-x86_64-3.6/alphapose/datasets copying alphapose/datasets/custom.py -> build/lib.linux-x86_64-3.6/alphapose/datasets copying alphapose/datasets/concat_dataset.py -> build/lib.linux-x86_64-3.6/alphapose/datasets copying alphapose/datasets/__init__.py -> build/lib.linux-x86_64-3.6/alphapose/datasets copying alphapose/datasets/coco_wholebody.py -> build/lib.linux-x86_64-3.6/alphapose/datasets copying alphapose/datasets/coco_wholebody_det.py -> build/lib.linux-x86_64-3.6/alphapose/datasets copying alphapose/datasets/halpe_136_det.py -> build/lib.linux-x86_64-3.6/alphapose/datasets copying alphapose/datasets/halpe_26.py -> 
build/lib.linux-x86_64-3.6/alphapose/datasets copying alphapose/datasets/mpii.py -> build/lib.linux-x86_64-3.6/alphapose/datasets copying alphapose/datasets/halpe_136.py -> build/lib.linux-x86_64-3.6/alphapose/datasets copying alphapose/datasets/mscoco.py -> build/lib.linux-x86_64-3.6/alphapose/datasets copying alphapose/datasets/halpe_26_det.py -> build/lib.linux-x86_64-3.6/alphapose/datasets copying alphapose/datasets/coco_det.py -> build/lib.linux-x86_64-3.6/alphapose/datasets creating build/lib.linux-x86_64-3.6/alphapose/models copying alphapose/models/fastpose_duc_dense.py -> build/lib.linux-x86_64-3.6/alphapose/models copying alphapose/models/fastpose.py -> build/lib.linux-x86_64-3.6/alphapose/models copying alphapose/models/criterion.py -> build/lib.linux-x86_64-3.6/alphapose/models copying alphapose/models/__init__.py -> build/lib.linux-x86_64-3.6/alphapose/models copying alphapose/models/simplepose.py -> build/lib.linux-x86_64-3.6/alphapose/models copying alphapose/models/fastpose_duc.py -> build/lib.linux-x86_64-3.6/alphapose/models copying alphapose/models/hardnet.py -> build/lib.linux-x86_64-3.6/alphapose/models copying alphapose/models/builder.py -> build/lib.linux-x86_64-3.6/alphapose/models copying alphapose/models/hrnet.py -> build/lib.linux-x86_64-3.6/alphapose/models creating build/lib.linux-x86_64-3.6/alphapose/utils/presets copying alphapose/utils/presets/__init__.py -> build/lib.linux-x86_64-3.6/alphapose/utils/presets copying alphapose/utils/presets/simple_transform.py -> build/lib.linux-x86_64-3.6/alphapose/utils/presets creating build/lib.linux-x86_64-3.6/alphapose/utils/roi_align copying alphapose/utils/roi_align/roi_align.py -> build/lib.linux-x86_64-3.6/alphapose/utils/roi_align copying alphapose/utils/roi_align/__init__.py -> build/lib.linux-x86_64-3.6/alphapose/utils/roi_align running build_ext building 'detector.nms.soft_nms_cpu' extension creating build/temp.linux-x86_64-3.6 creating build/temp.linux-x86_64-3.6/detector creating build/temp.linux-x86_64-3.6/detector/nms creating build/temp.linux-x86_64-3.6/detector/nms/src gcc -pthread -B /home/win/anaconda3/envs/alphapose/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/numpy/core/include -I/home/win/anaconda3/envs/alphapose/include/python3.6m -c detector/nms/src/soft_nms_cpu.cpp -o build/temp.linux-x86_64-3.6/detector/nms/src/soft_nms_cpu.o -Wno-unused-function -Wno-write-strings -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=soft_nms_cpu -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ In file included from /home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/numpy/core/include/numpy/ndarraytypes.h:1822:0, from /home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/numpy/core/include/numpy/ndarrayobject.h:12, from /home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/numpy/core/include/numpy/arrayobject.h:4, from detector/nms/src/soft_nms_cpu.cpp:638: /home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp] #warning "Using deprecated NumPy API, disable it with " \ ^~~~~~~ creating build/lib.linux-x86_64-3.6/detector creating build/lib.linux-x86_64-3.6/detector/nms g++ -pthread -shared -B 
/home/win/anaconda3/envs/alphapose/compiler_compat -L/home/win/anaconda3/envs/alphapose/lib -Wl,-rpath=/home/win/anaconda3/envs/alphapose/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/detector/nms/src/soft_nms_cpu.o -o build/lib.linux-x86_64-3.6/detector/nms/soft_nms_cpu.cpython-36m-x86_64-linux-gnu.so
building 'detector.nms.nms_cpu' extension
gcc -pthread -B /home/win/anaconda3/envs/alphapose/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/torch/include -I/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/torch/include/TH -I/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/win/anaconda3/envs/alphapose/include/python3.6m -c detector/nms/src/nms_cpu.cpp -o build/temp.linux-x86_64-3.6/detector/nms/src/nms_cpu.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=nms_cpu -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
detector/nms/src/nms_cpu.cpp: In instantiation of ‘at::Tensor nms_cpu_kernel(const at::Tensor&, float) [with scalar_t = double]’:
detector/nms/src/nms_cpu.cpp:63:3:   required from here
detector/nms/src/nms_cpu.cpp:26:47: warning: ‘T* at::Tensor::data() const [with T = unsigned char]’ is deprecated [-Wdeprecated-declarations]
   auto suppressed = suppressed_t.data<uint8_t>();
/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:322:7: note: declared here
   T * data() const {
[... the same -Wdeprecated-declarations warning, each with its "In file included from .../torch/extension.h" chain and "note: declared here", repeats for order, x1, y1, x2, y2 and areas at nms_cpu.cpp:27-32, for both scalar_t = double and scalar_t = float ...]
g++ -pthread -shared -B /home/win/anaconda3/envs/alphapose/compiler_compat -L/home/win/anaconda3/envs/alphapose/lib -Wl,-rpath=/home/win/anaconda3/envs/alphapose/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/detector/nms/src/nms_cpu.o -L/usr/local/cuda/lib64 -lcudart -o build/lib.linux-x86_64-3.6/detector/nms/nms_cpu.cpython-36m-x86_64-linux-gnu.so
building 'detector.nms.nms_cuda' extension
gcc -pthread -B /home/win/anaconda3/envs/alphapose/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/torch/include -I/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/torch/include/TH -I/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/win/anaconda3/envs/alphapose/include/python3.6m -c detector/nms/src/nms_cuda.cpp -o build/temp.linux-x86_64-3.6/detector/nms/src/nms_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=nms_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
detector/nms/src/nms_cuda.cpp: In function ‘at::Tensor nms(const at::Tensor&, float)’:
/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/torch/include/c10/util/Exception.h:355:20: warning: ‘void c10::detail::deprecated_AT_CHECK()’ is deprecated [-Wdeprecated-declarations]
detector/nms/src/nms_cuda.cpp:4:23: note: in expansion of macro ‘AT_CHECK’
 #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
detector/nms/src/nms_cuda.cpp:9:3: note: in expansion of macro ‘CHECK_CUDA’
   CHECK_CUDA(dets);
/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/torch/include/c10/util/Exception.h:330:13: note: declared here
 inline void deprecated_AT_CHECK() {}
[... the deprecated_AT_CHECK warning and its include chain repeat once more for the same CHECK_CUDA expansion ...]
/usr/local/cuda/bin/nvcc -I/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/torch/include -I/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/torch/include/TH -I/home/win/anaconda3/envs/alphapose/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/win/anaconda3/envs/alphapose/include/python3.6m -c detector/nms/src/nms_kernel.cu -o build/temp.linux-x86_64-3.6/detector/nms/src/nms_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=nms_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++11
unable to execute '/usr/local/cuda/bin/nvcc': No such file or directory
error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1
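Everything above is only deprecation warnings; the actual failure is the last two lines, where the build cannot find nvcc at /usr/local/cuda/bin/nvcc. Before rebuilding, it is worth checking where the toolkit really lives and exporting CUDA_HOME accordingly, since PyTorch's extension builder looks at CUDA_HOME before falling back to /usr/local/cuda. A minimal sketch, assuming the CUDA 11.2 toolkit is installed under /usr/local/cuda-11.2 (a hypothetical path; adjust to your installation):

ls /usr/local | grep cuda                    # see which toolkit directories actually exist
export CUDA_HOME=/usr/local/cuda-11.2        # point at the real install
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
nvcc --version                               # should now print the toolkit version
python setup.py build develop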

@mcvarer

mcvarer commented Jul 7, 2021

This is very weird. I've pulled the same version of the PyTorch build as you and I have the same nvcc version, yet compilation went fine. I'm investigating a bit further, but I wonder if it might be related to facebookarchive/caffe2#1898.

Can you try doing something like

CUDA_HOST_COMPILER=/usr/bin/gcc-5 python setup.py build develop

For CUDA 11.2, use a newer host compiler:

CUDA_HOST_COMPILER=/usr/bin/gcc-10 python setup.py build develop

and add the code below to each of the files listed underneath it (newer PyTorch releases removed the AT_CHECK macro in favor of TORCH_CHECK, so this shim maps the old name to the new one):

#ifndef AT_CHECK
#define AT_CHECK TORCH_CHECK 
#endif

maskrcnn_benchmark/csrc/cuda/deform_conv_cuda.cu
maskrcnn_benchmark/csrc/cuda/deform_pool_cuda.cu
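To make sure no other source file that still uses the removed macro is missed, a quick search like the following lists every candidate first (a hedged sketch; the path assumes the maskrcnn-benchmark layout shown above):

grep -rl "AT_CHECK" maskrcnn_benchmark/csrc   # list all files still referencing AT_CHECK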

@bamboopu

"-DCUDA_HOST_COMPILER=/usr/bin/gcc-5"

Unfortunately, this does not seem to work with CUDA 11 and g++ 5.4.
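That is expected: g++ 5.4 is outside the host-compiler range CUDA 11 supports, and nvcc uses whichever gcc it finds first on PATH unless told otherwise, so pointing CUDA_HOST_COMPILER at gcc-5 cannot help here. One possible workaround on Ubuntu, assuming gcc-9/g++-9 are already installed (any GCC version supported by your CUDA release should do), is to switch the default compiler before rebuilding:

sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 90
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-9 90
gcc --version                 # confirm the compiler nvcc will now pick up
python setup.py build develop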
