
Error at inference time, regarding fraction_of_gpu_memory_to_use #19127

Closed
houj04 opened this issue Aug 12, 2019 · 3 comments
Assignees
Labels
status/close (closed)

Comments

@houj04
Contributor

houj04 commented Aug 12, 2019

System environment:

Linux ubuntu 5.0.0-23-generic #24~18.04.1-Ubuntu SMP Mon Jul 29 16:12:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

gcc version:

gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)

Inference library source:
https://www.paddlepaddle.org.cn/documentation/docs/zh/1.5/advanced_usage/deploy/inference/build_and_install_lib_cn.html
I downloaded the cpu_avx_mkl 1.5.1 build from the link above.

Inference library version:

GIT COMMIT ID: bc9fd1f
WITH_MKL: ON
WITH_MKLDNN: ON
WITH_GPU: OFF

Code being compiled:

#include "paddle_inference_api.h"
int main() {
    // create a config and modify associated options
    paddle::NativeConfig config;
    config.model_dir = "xxx";
    config.use_gpu = false;
    config.fraction_of_gpu_memory = 0.15;
    config.device = 0;
    // create a native PaddlePredictor
    auto predictor = paddle::CreatePaddlePredictor<paddle::NativeConfig>(config);
    return 0;
}

Compile and run commands (the inference library was extracted to ~/fluid_inference/):

#!/bin/bash
set -x
# compile
g++ -I ~/fluid_inference/paddle/include/ simple.cpp -L ~/fluid_inference/paddle/lib/ -lpaddle_fluid -Wl,-rpath ~/fluid_inference/third_party/install/mklml/lib/ -Wl,-rpath ~/fluid_inference/third_party/install/mkldnn/lib/ -Wl,-rpath ~/fluid_inference/paddle/lib/
# prepare running libs
export LD_LIBRARY_PATH=~/fluid_inference/third_party/install/mklml/lib/:~/fluid_inference/third_party/install/mkldnn/lib/:~/fluid_inference/paddle/lib/
# run
./a.out

Runtime error message:

ERROR: unknown command line flag 'fraction_of_gpu_memory_to_use'

@Superjomn
Contributor

This happens because you are using the CPU-only build; this option only exists in the GPU-enabled build.

@houj04
Contributor Author

houj04 commented Aug 12, 2019

I downloaded it from
https://www.paddlepaddle.org.cn/documentation/docs/zh/1.5/advanced_usage/deploy/inference/build_and_install_lib_cn.html
specifically this one:
[screenshot of the download table]
According to version.txt, it should not include GPU support:

GIT COMMIT ID: bc9fd1f
WITH_MKL: ON
WITH_MKLDNN: ON
WITH_GPU: OFF
Then I changed the code to the following and still got the same error:

#include "paddle_inference_api.h"
int main() {
    // create a config and modify associated options
    paddle::NativeConfig config;
    config.model_dir = "xxx";
    auto predictor = paddle::CreatePaddlePredictor<paddle::NativeConfig>(config);
    return 0;
}

@houj04
Contributor Author

houj04 commented Aug 12, 2019

I switched to another machine (a BCC server on Baidu Cloud):
System environment:

Linux instance-d4ztutsr 4.13.0-41-generic #46~16.04.1-Ubuntu SMP Thu May 3 10:06:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

gcc version:

gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.11)

I added -std=c++11 to the compile flags (it would not compile otherwise), and got the same error.

@paddle-bot paddle-bot bot added the status/close (closed) label Jan 11, 2023