System environment:
Linux ubuntu 5.0.0-23-generic #24~18.04.1-Ubuntu SMP Mon Jul 29 16:12:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
gcc version:
gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
Inference library source: downloaded the cpu_avx_mkl 1.5.1 build from https://www.paddlepaddle.org.cn/documentation/docs/zh/1.5/advanced_usage/deploy/inference/build_and_install_lib_cn.html

Inference library version:

```
GIT COMMIT ID: bc9fd1f
WITH_MKL: ON
WITH_MKLDNN: ON
WITH_GPU: OFF
```
Code being compiled:

```cpp
#include "paddle_inference_api.h"

int main() {
  // create a config and modify associated options
  paddle::NativeConfig config;
  config.model_dir = "xxx";
  config.use_gpu = false;
  config.fraction_of_gpu_memory = 0.15;
  config.device = 0;
  // create a native PaddlePredictor
  auto predictor = paddle::CreatePaddlePredictor<paddle::NativeConfig>(config);
  return 0;
}
```
Compile and run commands (the inference library was extracted to ~/fluid_inference/):

```shell
#!/bin/bash
set -x

# compile
g++ -I ~/fluid_inference/paddle/include/ simple.cpp \
    -L ~/fluid_inference/paddle/lib/ -lpaddle_fluid \
    -Wl,-rpath ~/fluid_inference/third_party/install/mklml/lib/ \
    -Wl,-rpath ~/fluid_inference/third_party/install/mkldnn/lib/ \
    -Wl,-rpath ~/fluid_inference/paddle/lib/

# prepare running libs
export LD_LIBRARY_PATH=~/fluid_inference/third_party/install/mklml/lib/:~/fluid_inference/third_party/install/mkldnn/lib/:~/fluid_inference/paddle/lib/

# run
./a.out
```
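As a debugging aid (a sketch added here, not part of the original report): when `LD_LIBRARY_PATH` or the rpath flags are in question, `ldd` can confirm whether the dynamic loader actually resolves every shared library the binary depends on. `./a.out` is assumed to be the binary produced by the script above.

```shell
# Debugging sketch: list the binary's shared-library dependencies.
# Any line containing "not found" means LD_LIBRARY_PATH / rpath is
# missing a directory; otherwise everything resolved.
ldd ./a.out 2>/dev/null | grep "not found" || echo "all shared libraries resolved"
```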
Runtime error message:

```
ERROR: unknown command line flag 'fraction_of_gpu_memory_to_use'
```
You are using the CPU-only build; this option only exists in the GPU-enabled build.
What I downloaded is the build from https://www.paddlepaddle.org.cn/documentation/docs/zh/1.5/advanced_usage/deploy/inference/build_and_install_lib_cn.html shown here. According to version.txt, it should not include GPU support:

version.txt

```
GIT COMMIT ID: bc9fd1f
WITH_MKL: ON
WITH_MKLDNN: ON
WITH_GPU: OFF
```

I then changed the code to the following and still get the same error:

```cpp
#include "paddle_inference_api.h"

int main() {
  // create a config and modify associated options
  paddle::NativeConfig config;
  config.model_dir = "xxx";
  auto predictor = paddle::CreatePaddlePredictor<paddle::NativeConfig>(config);
  return 0;
}
```
I also tried a different machine (a BCC server on Baidu Cloud). System environment:
Linux instance-d4ztutsr 4.13.0-41-generic #46~16.04.1-Ubuntu SMP Thu May 3 10:06:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.11)
I had to add -std=c++11 to the compile flags (it would not compile otherwise), and got the same error.