[LaneSeg] Fix a bug in cpp project #1741

Merged (26 commits) on Jan 24, 2022
2 changes: 1 addition & 1 deletion contrib/LaneSeg/README.md
@@ -227,6 +227,6 @@ python deploy/python/infer.py --help
```

#### Paddle Inference (C++)
- Refer to the [Paddle Inference tutorial](../../deploy/cpp/)
+ Refer to the [Paddle Inference tutorial](./deploy/cpp/README.md)

The C++ source files of the project are in LaneSeg/deploy/cpp.
2 changes: 1 addition & 1 deletion contrib/LaneSeg/README_CN.md
@@ -230,6 +230,6 @@ python deploy/python/infer.py --help
```

#### Paddle Inference Deployment (C++)
- See the [Paddle Inference deployment tutorial](../../deploy/cpp/)
+ See the [Paddle Inference deployment tutorial](./deploy/cpp/README_cn.md)

The C++ source files used by this project are in the LaneSeg/deploy/cpp directory.
8 changes: 2 additions & 6 deletions contrib/LaneSeg/deploy/cpp/CMakeLists.txt
@@ -1,16 +1,12 @@
cmake_minimum_required(VERSION 3.0)
project(cpp_inference_demo CXX C)

- option(WITH_MKL "Compile demo with MKL/OpenBlas support, default use MKL." OFF)
+ option(WITH_MKL "Compile demo with MKL/OpenBlas support, default use MKL." ON)
option(WITH_GPU "Compile demo with GPU/CPU, default use CPU." OFF)
option(WITH_STATIC_LIB "Compile demo with static/shared library, default use static." ON)
option(USE_TENSORRT "Compile demo with TensorRT." OFF)
option(WITH_ROCM "Compile demo with rocm." OFF)


- set(PADDLE_LIB ${CMAKE_SOURCE_DIR}/paddle)
- set(DEMO_NAME test_seg)

if(NOT WITH_STATIC_LIB)
add_definitions("-DPADDLE_WITH_SHARED_LIB")
else()
@@ -150,7 +146,7 @@ else()
endif()

if (NOT WIN32)
- set(EXTERNAL_LIB "-ldl -lpthread")
+ set(EXTERNAL_LIB "-lrt -ldl -lpthread")
set(DEPS ${DEPS}
${MATH_LIB} ${MKLDNN_LIB}
glog gflags protobuf xxhash cryptopp
46 changes: 46 additions & 0 deletions contrib/LaneSeg/deploy/cpp/README.md
@@ -0,0 +1,46 @@
English | [简体中文](README_CN.md)

## Deploy the PaddleSeg model using Paddle Inference C++


### 1. Installation

- Paddle Inference C++

- OpenCV

- Yaml

For more installation information, please refer to the [tutorial](../../../../docs/deployment/inference/cpp_inference.md).

### 2. Models and Images

- Download the model

Enter the `LaneSeg/` directory and execute the following commands:
```shell
mkdir output # if it does not exist
wget -P output https://paddleseg.bj.bcebos.com/lane_seg/bisenet/model.pdparams
```
- Export the model (the expected output layout is sketched after this list)

```shell
python export.py \
--config configs/bisenetV2_tusimple_640x368_300k.yml \
--model_path output/model.pdparams \
--save_dir output/export
```

- The test image is `data/test_images/3.jpg`.
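After the export step above, `output/export/` should contain the inference model plus the `deploy.yaml` that the C++ demo reads via `load_yaml()`. A sketch of the expected contents (assumed layout; exact file names can vary across PaddleSeg versions):

```shell
# Expected contents of output/export/ after a successful export
# (assumed layout; names may vary by PaddleSeg version):
#   deploy.yaml      # preprocessing config, loaded by test_seg via load_yaml()
#   model.pdmodel    # inference program
#   model.pdiparams  # model weights
ls output/export
```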

### 3. Compile and Execute

Enter the `LaneSeg/deploy/cpp` directory.

Run `sh run_seg_cpu.sh` to compile the demo and then run prediction on an x86 CPU.

Run `sh run_seg_gpu.sh` to compile the demo and then run prediction on an Nvidia GPU.

The results are saved as `out_img_seg.jpg` and `out_image_points.jpg`.

- Note: you can edit `run_seg_cpu.sh` and `run_seg_gpu.sh` to change the model and image paths as needed.
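The scripts wrap a direct invocation of the demo binary. For reference, a minimal sketch using the flags defined in `src/test_seg.cc` and the paths from the steps above:

```shell
# Run from LaneSeg/deploy/cpp after a successful build.
./build/test_seg \
    --model_dir=../../output/export/ \
    --img_path=../../data/test_images/3.jpg \
    --use_cpu=true \
    --use_mkldnn=false   # EnableMKLDNN is currently commented out in create_predictor()
```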
44 changes: 44 additions & 0 deletions contrib/LaneSeg/deploy/cpp/README_CN.md
@@ -0,0 +1,44 @@
Simplified Chinese | [English](README.md)

## Deploy the PaddleSeg model using Paddle Inference C++

### 1. Installation

- Paddle Inference C++

- OpenCV

- Yaml

For more installation information, please refer to the [tutorial](../../../../docs/deployment/inference/cpp_inference_cn.md).

### 2. Models and Images

- Download the model

Enter the `LaneSeg/` directory and execute the following commands:
```shell
mkdir output # if it does not exist
wget -P output https://paddleseg.bj.bcebos.com/lane_seg/bisenet/model.pdparams
```
- Export the model
```shell
python export.py \
--config configs/bisenetV2_tusimple_640x368_300k.yml \
--model_path output/model.pdparams \
--save_dir output/export
```

- The test image is `data/test_images/3.jpg`.

### 3. Compile and Execute

Enter the `LaneSeg/deploy/cpp` directory.

Run `sh run_seg_cpu.sh` to compile the demo and then run prediction on an x86 CPU.

Run `sh run_seg_gpu.sh` to compile the demo and then run prediction on an Nvidia GPU.

The results are saved as `out_img_seg.jpg` and `out_image_points.jpg` in the current directory.

- Note: you can edit `run_seg_cpu.sh` and `run_seg_gpu.sh` to change the model and image paths as needed.
35 changes: 35 additions & 0 deletions contrib/LaneSeg/deploy/cpp/run_seg_cpu.sh
@@ -0,0 +1,35 @@
#!/bin/bash
set +x
set -e

WITH_MKL=ON
WITH_GPU=OFF
USE_TENSORRT=OFF
DEMO_NAME=test_seg

work_path=$(dirname $(readlink -f $0))
LIB_DIR="${work_path}/paddle_inference"

# compile
mkdir -p build
cd build
rm -rf *

cmake .. \
-DDEMO_NAME=${DEMO_NAME} \
-DWITH_MKL=${WITH_MKL} \
-DWITH_GPU=${WITH_GPU} \
-DUSE_TENSORRT=${USE_TENSORRT} \
-DWITH_STATIC_LIB=OFF \
-DPADDLE_LIB=${LIB_DIR}

make -j

# run
cd ..
# change model_dir and img_path according to your needs
./build/test_seg \
--model_dir=../../output/export/ \
--img_path=../../data/test_images/3.jpg \
--use_cpu=true \
--use_mkldnn=true
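Note that both scripts assume the Paddle Inference C++ package has been unpacked as `paddle_inference/` next to them (see `LIB_DIR`). A sketch of the assumed layout, following the standard package structure:

```shell
# Assumed layout (adjust LIB_DIR in run_seg_cpu.sh if yours differs):
#   LaneSeg/deploy/cpp/paddle_inference/
#     paddle/        # headers and prebuilt libraries
#     third_party/   # bundled dependencies (glog, gflags, protobuf, ...)
ls paddle_inference
```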
34 changes: 34 additions & 0 deletions contrib/LaneSeg/deploy/cpp/run_seg_gpu.sh
@@ -0,0 +1,34 @@
#!/bin/bash
set +x
set -e

WITH_MKL=ON
WITH_GPU=ON
USE_TENSORRT=OFF
DEMO_NAME=test_seg

work_path=$(dirname $(readlink -f $0))
LIB_DIR="${work_path}/paddle_inference"

# compile
mkdir -p build
cd build
rm -rf *

cmake .. \
-DDEMO_NAME=${DEMO_NAME} \
-DWITH_MKL=${WITH_MKL} \
-DWITH_GPU=${WITH_GPU} \
-DUSE_TENSORRT=${USE_TENSORRT} \
-DWITH_STATIC_LIB=OFF \
-DPADDLE_LIB=${LIB_DIR}

make -j

# run
cd ..
# change model_dir and img_path according to your needs
./build/test_seg \
--model_dir=../../output/export/ \
--img_path=../../data/test_images/3.jpg \
--use_cpu=false
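Both the build option `USE_TENSORRT` and the runtime flag `--use_trt` default to off. A hypothetical TensorRT run, assuming a TensorRT-enabled Paddle Inference package and a rebuild with `USE_TENSORRT=ON`, might look like this:

```shell
# Hypothetical TensorRT invocation (unverified; requires a TensorRT build
# of the Paddle Inference library and USE_TENSORRT=ON at compile time):
./build/test_seg \
    --model_dir=../../output/export/ \
    --img_path=../../data/test_images/3.jpg \
    --use_cpu=false \
    --use_trt=true   # triggers infer_config.EnableTensorRtEngine(...)
```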
39 changes: 28 additions & 11 deletions contrib/LaneSeg/deploy/cpp/src/test_seg.cc
@@ -35,7 +35,8 @@ DEFINE_string(model_dir, "", "Directory of the inference model. "
"It contains deploy.yaml and infer models");
DEFINE_string(img_path, "", "Path of the test image.");
DEFINE_bool(use_cpu, false, "Whether use CPU. Default: use GPU.");
+ DEFINE_bool(use_trt, false, "Whether enable TensorRT when use GPU. Default: false.");
+ DEFINE_bool(use_mkldnn, false, "Whether enable MKLDNN when use CPU. Default: false.");
DEFINE_string(save_dir, "", "Directory of the output image.");

typedef struct YamlConfig {
@@ -77,6 +78,22 @@ std::shared_ptr<paddle_infer::Predictor> create_predictor(
model_dir + "/" + yaml_config.params_file);
infer_config.EnableMemoryOptim();

+   if (FLAGS_use_cpu) {
+     LOG(INFO) << "Use CPU";
+     if (FLAGS_use_mkldnn) {
+       // TODO(jc): fix the bug
+       // infer_config.EnableMKLDNN();
+       infer_config.SetCpuMathLibraryNumThreads(5);
+     }
+   } else {
+     LOG(INFO) << "Use GPU";
+     infer_config.EnableUseGpu(100, 0);
+     if (FLAGS_use_trt) {
+       infer_config.EnableTensorRtEngine(1 << 20, 1, 3,
+           paddle_infer::PrecisionType::kFloat32, false, false);
+     }
+   }

auto predictor = paddle_infer::CreatePredictor(infer_config);
return predictor;
}
@@ -100,12 +117,13 @@ void process_image(const YamlConfig& yaml_config, cv::Mat& img) {
}
}


int main(int argc, char *argv[]) {
google::ParseCommandLineFlags(&argc, &argv, true);
if (FLAGS_model_dir == "") {
LOG(FATAL) << "The model_dir should not be empty.";
}

// Load yaml
std::string yaml_path = FLAGS_model_dir + "/deploy.yaml";
YamlConfig yaml_config = load_yaml(yaml_path);
@@ -144,13 +162,13 @@ int main(int argc, char *argv[]) {
auto output_t = predictor->GetOutputHandle(output_names[0]);
std::vector<int> output_shape = output_t->shape();
int out_num = std::accumulate(output_shape.begin(), output_shape.end(), 1,
std::multiplies<int>());
std::vector<float> out_data(out_num);
output_t->CopyToCpu(out_data.data());

cv::Size size = cv::Size(cols, rows);
int skip_index = size.height * size.width;

const int num_classes = 7;
LanePostProcess* laneNet = new LanePostProcess(input_height, input_width, rows, cols, num_classes);
auto lane_coords = laneNet->lane_process(out_data, cut_height);
@@ -170,10 +188,9 @@ int main(int argc, char *argv[]) {
}
lane_id++;
}

cv::imshow("image lane", image_ori);
cv::waitKey();


cv::imwrite("out_image_points.jpg", image_ori);

cv::Mat seg_planes[num_classes];
for(int i = 0; i < num_classes; i++) {
seg_planes[i].create(size, CV_32FC(1));
@@ -200,7 +217,7 @@ int main(int argc, char *argv[]) {
// Get pseudo image
cv::Mat out_eq_img;
cv::equalizeHist(binary_image, out_eq_img);
cv::imwrite("out_img.jpg", binary_image*255);
cv::imwrite("out_img_seg.jpg", binary_image*255);

LOG(INFO) << "Finish";
}