no such target '@org_tensorflow//third_party/gpus/crosstool:crosstool' #318
I am facing the same problem.
I was able to make it compile. Here is a script to do it: https://gist.github.com/jorgemf/0f2025a45e1568663f4c20551a5881f1 You only need to modify the variables and the exports with the values you want, and everything works.
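A minimal sketch of how one might drive such a script (the file name `build.sh` is an assumption, and the two variables shown mirror the exports listed in the reproduction steps below):

```shell
# Hypothetical usage of the gist: override the variables it reads,
# then run a local copy (the name build.sh is assumed here).
export TF_CUDA_VERSION=8.0                 # match your installed CUDA toolkit
export CUDA_TOOLKIT_PATH=/usr/local/cuda   # adjust if installed elsewhere
# sh build.sh                              # local copy of the gist
echo "building against CUDA $TF_CUDA_VERSION from $CUDA_TOOLKIT_PATH"
```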
@jorgemf works for me.
@jorgemf I got a successful compile with your script, but it doesn't seem to have support for the GPU, even after adding:
Steps to reproduce:
git clone --recurse-submodules https://github.com/tensorflow/serving
cd serving
export TF_NEED_CUDA=1
export TF_NEED_GCP=1
export TF_NEED_JEMALLOC=1
export TF_NEED_HDFS=0
export TF_NEED_OPENCL=0
export TF_ENABLE_XLA=0
export TF_CUDA_VERSION=8.0
export TF_CUDNN_VERSION=5
export TF_CUDA_COMPUTE_CAPABILITIES="3.5,5.2,6.1"
export CUDA_TOOLKIT_PATH="/usr/local/cuda"
export CUDNN_INSTALL_PATH="/usr/local/cuda"
export GCC_HOST_COMPILER_PATH="/usr/bin/gcc"
export PYTHON_BIN_PATH="/home/opt/anaconda/envs/py2/bin/python"
export CC_OPT_FLAGS="-march=native"
export PYTHON_LIB_PATH="/home/opt/anaconda/envs/py2/lib/python2.7/site-packages"
cd tensorflow
./configure
cd ..
# Ref: https://github.com/tensorflow/serving/issues/318#issuecomment-283498443
sed -i.bak 's/@org_tensorflow\/\/third_party\/gpus\/crosstool/@local_config_cuda\/\/crosstool:toolchain/g' tools/bazel.rc
bazel build -c opt --config=cuda --spawn_strategy=standalone //tensorflow_serving/model_servers:tensorflow_model_server
# add `with tf.device("/gpu")` to `mnist_saved_model.py`
sed -i '138s/.*/with tf.device("\/gpu"):/' tensorflow_serving/example/mnist_saved_model.py
sed -i '139s/.*/ if __name__ == "__main__":/' tensorflow_serving/example/mnist_saved_model.py
sed -i '140s/.*/ tf.app.run()/' tensorflow_serving/example/mnist_saved_model.py
bazel build //tensorflow_serving/example:mnist_saved_model
bazel-bin/tensorflow_serving/example/mnist_saved_model /tmp/mnist_model
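The two sed forms used above (a global pattern substitution in tools/bazel.rc and line-addressed replacements in mnist_saved_model.py) can be illustrated on a throwaway file; the scratch file here is a stand-in, not the real repository files:

```shell
# Stand-in demo of the sed edits above, run against a scratch file (GNU sed).
scratch=$(mktemp)
printf 'build --crosstool_top=@org_tensorflow//third_party/gpus/crosstool\n' > "$scratch"
# Global substitution, as done to tools/bazel.rc:
sed -i.bak 's/@org_tensorflow\/\/third_party\/gpus\/crosstool/@local_config_cuda\/\/crosstool:toolchain/g' "$scratch"
cat "$scratch"   # prints: build --crosstool_top=@local_config_cuda//crosstool:toolchain
# Line-addressed replacement of an entire line, as done to mnist_saved_model.py
# (line 1 here stands in for line 138 there):
sed -i '1s/.*/with tf.device("\/gpu"):/' "$scratch"
cat "$scratch"   # prints: with tf.device("/gpu"):
```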
Thanks @jorgemf for the compile script. It worked for me too, and it was a lot simpler than my solution in #349 :). However, it doesn't seem to be using the GPU for me either. My saved model does not explicitly request GPU allocation, but it should use the GPU by default if one is available. And, as you say, tf-serving should allocate most of the GPU RAM on launch, and it clearly doesn't.
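One hedged way to confirm that the server actually claims GPU memory is to watch nvidia-smi after launch; since the real numbers depend on the machine, the parsing below runs on canned sample output standing in for a real `nvidia-smi --query-gpu=memory.used --format=csv` call:

```shell
# Parse memory.used from nvidia-smi CSV output (the sample string stands in
# for the live query; a TF Serving process with CUDA active typically grabs
# most of the GPU RAM at startup).
sample="memory.used [MiB]
7423 MiB"
used=$(printf '%s\n' "$sample" | tail -n1 | awk '{print $1}')
if [ "$used" -gt 1000 ]; then
  echo "GPU memory in use: ${used} MiB"
else
  echo "GPU appears idle"
fi
```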
@vtablan have you set ...? I have just tested, and it doesn't compile; I am not sure whether it is my script's fault or due to some internal change. Anyway, I cannot review the script for every commit. Here is the error:
Try a version from 15 days ago, the same one I used to compile it. It should work. In my experience, TensorFlow Serving is under active development and broken a lot of the time.
Interesting... I had cloned my repository just before posting my previous comment, and it compiled fine for me with your script. I had edited your script to hardcode the location of the repository and the use of python3 (and the associated python path). Other than that, I made no changes to your script. What I was saying above is that the ... I'll post an update if that's successful.
@jorgemf Success - I now have a tf-model_server that does indeed use the GPU. To get there I used:
Thanks again for providing the script!
Master now compiles for me with GPU support. Closing the issue.
@jorgemf I wonder if the script works with the new version... below is the error:
@sailor88128 It might be that you are using another shell. It works on Linux only.
I just used it in nvidia-docker with Ubuntu 16. Oh, and I use cuDNN 6.0.21, but I changed 7.0 to 6.0 in the script; is that the problem?
@sailor88128 Yes, it is. The script is very specific: you have to use the versions pinned in the script and the matching Bazel version (which I do not remember now), otherwise it won't compile.
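A small sanity check along those lines (the expected value is the one hard-coded in the script's exports; detection of the installed version is stubbed here, since in practice it would be parsed from cudnn.h or `bazel version`):

```shell
# Compare the cuDNN version the script expects against what is installed.
# found_cudnn is a stub standing in for a value parsed from the system.
expected_cudnn="5"
found_cudnn="6"
if [ "$found_cudnn" != "$expected_cudnn" ]; then
  echo "cuDNN mismatch: script expects $expected_cudnn, found $found_cudnn"
fi
```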
Oh, got it. Thanks a lot.
@sailor88128, no. I used my local machine. You can try the official TensorFlow images: https://hub.docker.com/r/tensorflow/tensorflow/tags/
I am trying to compile tensorflow_model_server from master.
Error:
Steps to reproduce:
As a side note, when I try to compile tensorflow_model_server from an external project, it works but doesn't have GPU support.
The solutions in #225 don't work.
EDITED: I finally made this script to compile it with CUDA support: https://gist.github.com/jorgemf/0f2025a45e1568663f4c20551a5881f1