load 'tvm_runtime_op.so' error #3
It seems to be a problem with your dynamic library file. How do you build the …
Doesn’t it look like an ABI issue?
@tobegit3hub

g++ -g -std=c++11 -shared tvm_runtime_kernels.cc tvm_runtime_ops.cc -o tvm_runtime_op.so -fPIC ${TF_CFLAGS[@]} ${TF_LFLAGS[@]} -O2 -I${TVM_HOME}/include -I${TVM_HOME}/3rdparty/dmlc-core/include -I${TVM_HOME}/3rdparty/dlpack/include -I/usr/local/cuda/include -ltvm_runtime -L${TVM_HOME}/build -ldl -lpthread

Some warnings like this: …
@zouzhene |
@zhuochenKIDD I use pre-built TensorFlow, version tensorflow-gpu==1.15.0.
I met the same problem with tensorflow 2.0.0.
Seems like a linking error, which …
On these issues, could you try "nm -g tvm_runtime_op.so" to see if similar symbols exist? If so, this would be an indication of compiler ABI incompatibility.
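As a quick way to read the result of that nm check, the mangled name itself reveals which libstdc++ ABI a symbol was compiled against: the pre-C++11 ABI abbreviates std::string as "Ss", while the C++11 ABI mangles it through the std::__cxx11 inline namespace. A minimal sketch (the helper name is hypothetical, and the substring check is a heuristic, not a full demangler):

```python
# Hypothetical helper: guess the libstdc++ ABI from a GNU-mangled symbol.
def guess_cxx_abi(mangled: str) -> str:
    if "__cxx11" in mangled:
        # std::string mangled via the std::__cxx11 inline namespace
        return "cxx11 ABI (_GLIBCXX_USE_CXX11_ABI=1)"
    if "Ss" in mangled:
        # "Ss" is the old-ABI abbreviation for std::string
        return "pre-cxx11 ABI (_GLIBCXX_USE_CXX11_ABI=0)"
    return "no std::string in signature"

# The undefined symbol from this issue: note the "RKSs" (const std::string&)
print(guess_cxx_abi("_ZN3tvm7runtime6Module12LoadFromFileERKSsS3_"))
```

Here the missing symbol uses the old-ABI spelling, which suggests the op library expects an old-ABI TVM runtime while the installed libtvm_runtime exports the new-ABI variant.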
I removed the …
Rebuilding tvm with …
@VoVAllen Perhaps you need to build LLVM from source with -D_GLIBCXX_USE_CXX11_ABI=0: apache/tvm#521 (comment). I guess @tobegit3hub uses a self-built TensorFlow with a newer gcc (>=5), so the ABI incompatibility issue did not occur. If you use pre-built TensorFlow from pip install, TF is built with gcc 4.8.5; check your environment's compiler version for ABI incompatibility.
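One way to check which ABI your pre-built TensorFlow expects is to look for the _GLIBCXX_USE_CXX11_ABI macro among its reported compile flags. A sketch, with the flags list hard-coded for illustration (in practice it comes from tf.sysconfig.get_compile_flags(); the helper name is hypothetical):

```python
# Hypothetical helper: extract the ABI macro value from a compile-flags list.
def abi_flag(flags):
    for f in flags:
        if f.startswith("-D_GLIBCXX_USE_CXX11_ABI="):
            return int(f.split("=")[1])
    return None  # macro not set explicitly

# Illustrative flags; pip-installed TF 1.15 reports the macro set to 0.
example_flags = [
    "-I/usr/lib/python3/dist-packages/tensorflow/include",
    "-D_GLIBCXX_USE_CXX11_ABI=0",
]
print(abi_flag(example_flags))  # → 0
```

If this reports 0 but your TVM build (or LLVM) was compiled with the new ABI, the undefined-symbol error above is the expected failure mode.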
It's annoying. You have to rebuild TensorFlow or LLVM/TVM to solve this, and both take a long time...
TVM can be viewed as two parts: the compiler stack and the runtime.
In particular, the TVM runtime can run in header-only mode, which means it could potentially be built together with TensorFlow to avoid any ABI issues. In that case, it could also avoid having to rebuild LLVM.
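The idea above could be sketched as composing a single compile command that includes the TVM runtime sources alongside the op sources, so everything is compiled with one compiler and one ABI setting. The runtime-pack path (apps/howto_deploy/tvm_runtime_pack.cc) and the ABI value are assumptions to adapt to your checkout:

```python
import os

# Sketch: build the TF op together with TVM's bundled runtime sources in one
# g++ invocation, so both sides share the same compiler and ABI flag.
def compose_build_cmd(tvm_home, tf_cflags, tf_lflags, abi=0):
    return [
        "g++", "-std=c++11", "-shared", "-fPIC", "-O2",
        f"-D_GLIBCXX_USE_CXX11_ABI={abi}",  # match TensorFlow's ABI
        "tvm_runtime_ops.cc",
        "tvm_runtime_kernels.cc",
        # compiling the runtime sources directly, instead of linking a
        # separately built libtvm_runtime.so, avoids the ABI mismatch
        os.path.join(tvm_home, "apps/howto_deploy/tvm_runtime_pack.cc"),
        f"-I{tvm_home}/include",
        f"-I{tvm_home}/3rdparty/dmlc-core/include",
        f"-I{tvm_home}/3rdparty/dlpack/include",
        *tf_cflags,
        "-o", "tvm_runtime_op.so",
        *tf_lflags, "-ldl", "-lpthread",
    ]

print(" ".join(compose_build_cmd("/opt/tvm", [], [])))
```

In practice tf_cflags and tf_lflags would come from tf.sysconfig.get_compile_flags() and tf.sysconfig.get_link_flags().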
Sorry for the late response. We have updated the code for the final Python API, and the build script was updated to avoid conflicts when re-loading …
@tobegit3hub I have a similar issue here. In order to build correctly, I had to upgrade CMake to 3.16, build with C++14 rather than C++11, and add … And when running the addone example, I got this error: …
I am getting the same error. Have you, by any chance, found a fix already? Thanks!
Yes, I rewrote the … By the way, directly compiling …
Thanks for the reply, Siyuan. If you use target_link_libraries, does that mean you are not getting link flags from tf.sysconfig.get_link_flags()? That seems a bit fragile if TF changes its link flags in the future. Meanwhile, I did some investigation and found that the issue is that target_link_options places -ltensorflow_framework in front of the compile targets in the g++ command, which effectively discards all symbols in tensorflow_framework.so*. I fixed the issue by building tvm_dso_op.so with Bazel, which handles library dependency ordering correctly.
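The ordering problem described here comes from the GNU linker resolving undefined symbols left to right: a library listed before the objects that need it contributes nothing. Keeping library flags after the input files avoids it. A minimal sketch (the function name is hypothetical):

```python
# Sketch: compose a link command with libraries strictly after the inputs,
# since GNU ld scans arguments left to right when resolving symbols.
def compose_link_cmd(sources, output, cflags, lflags):
    # sources first, then -o, library flags last
    return ["g++", "-shared", "-fPIC", *cflags, *sources, "-o", output, *lflags]

cmd = compose_link_cmd(["tvm_dso_op.cc"], "tvm_dso_op.so",
                       ["-std=c++14"], ["-ltensorflow_framework"])
print(" ".join(cmd))
```

With this ordering, -ltensorflow_framework appears after tvm_dso_op.cc, so the symbols the op references are actually pulled from tensorflow_framework.so.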
@gmagogsfm |
Thanks all for the usage. The patch is now merged into TVM upstream, and you can build TVM with … The issue will be closed and maintained only in the TVM codebase.
When I run

tvm_runtime_ops = load_library.load_op_library('tvm_runtime_op.so')

I get

tensorflow.python.framework.errors_impl.NotFoundError: tvm_runtime_op.so: undefined symbol: _ZN3tvm7runtime6Module12LoadFromFileERKSsS3_
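For an error like the one above, pre-loading the shared object with ctypes can surface the undefined symbol directly, outside TensorFlow's op loader; a sketch (the file name is taken from this issue):

```python
import ctypes

# Attempt a plain dlopen of the op library; on failure the OSError message
# usually names the first unresolved symbol, which is easier to inspect than
# TensorFlow's NotFoundError wrapper.
try:
    ctypes.CDLL("./tvm_runtime_op.so", mode=ctypes.RTLD_GLOBAL)
    print("loaded OK")
except OSError as e:
    print("dlopen failed:", e)
```

If dlopen itself fails with the same undefined symbol, the problem is purely in how the .so was built and linked, not in TensorFlow's loading path.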