Add external model's example code and Docs. #102

Merged (107 commits, Aug 12, 2022)
Commits
1684b05
first commit for yolov7
ziqi-jin Jul 13, 2022
71c00d9
pybind for yolov7
ziqi-jin Jul 14, 2022
21ab2f9
CPP README.md
ziqi-jin Jul 14, 2022
d63e862
CPP README.md
ziqi-jin Jul 14, 2022
7b3b0e2
modified yolov7.cc
ziqi-jin Jul 14, 2022
d039e80
README.md
ziqi-jin Jul 15, 2022
a34a815
python file modify
ziqi-jin Jul 18, 2022
eb010a8
merge test
ziqi-jin Jul 18, 2022
39f64f2
delete license in fastdeploy/
ziqi-jin Jul 18, 2022
d071b37
repush the conflict part
ziqi-jin Jul 18, 2022
d5026ca
README.md modified
ziqi-jin Jul 18, 2022
fb376ad
README.md modified
ziqi-jin Jul 18, 2022
4b8737c
file path modified
ziqi-jin Jul 18, 2022
ce922a0
file path modified
ziqi-jin Jul 18, 2022
6e00b82
file path modified
ziqi-jin Jul 18, 2022
8c359fb
file path modified
ziqi-jin Jul 18, 2022
906c730
file path modified
ziqi-jin Jul 18, 2022
80c1223
README modified
ziqi-jin Jul 18, 2022
6072757
README modified
ziqi-jin Jul 18, 2022
2c6e6a4
move some helpers to private
ziqi-jin Jul 18, 2022
48136f0
add examples for yolov7
ziqi-jin Jul 18, 2022
6feca92
api.md modified
ziqi-jin Jul 18, 2022
ae70d4f
api.md modified
ziqi-jin Jul 18, 2022
f591b85
api.md modified
ziqi-jin Jul 18, 2022
f0def41
YOLOv7
ziqi-jin Jul 18, 2022
15b9160
yolov7 release link
ziqi-jin Jul 18, 2022
4706e8c
yolov7 release link
ziqi-jin Jul 18, 2022
dc83584
yolov7 release link
ziqi-jin Jul 18, 2022
086debd
copyright
ziqi-jin Jul 18, 2022
4f980b9
change some helpers to private
ziqi-jin Jul 18, 2022
2e61c95
Merge branch 'develop' into develop
ziqi-jin Jul 19, 2022
80beadf
change variables to const and fix documents.
ziqi-jin Jul 19, 2022
8103772
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 19, 2022
f5f7a86
gitignore
ziqi-jin Jul 19, 2022
e6cec25
Transfer some funtions to private member of class
ziqi-jin Jul 19, 2022
e25e4f2
Transfer some funtions to private member of class
ziqi-jin Jul 19, 2022
e8a8439
Merge from develop (#9)
ziqi-jin Jul 20, 2022
a182893
first commit for yolor
ziqi-jin Jul 20, 2022
3aa015f
for merge
ziqi-jin Jul 20, 2022
d6b98aa
Develop (#11)
ziqi-jin Jul 20, 2022
871cfc6
Merge branch 'yolor' into develop
ziqi-jin Jul 20, 2022
013921a
Yolor (#16)
ziqi-jin Jul 21, 2022
7a5a6d9
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 21, 2022
c996117
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 22, 2022
0aefe32
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 26, 2022
2330414
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 26, 2022
4660161
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 27, 2022
033c18e
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 28, 2022
6c94d65
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 28, 2022
85fb256
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 29, 2022
90ca4cb
add is_dynamic for YOLO series (#22)
ziqi-jin Jul 29, 2022
f6a4ed2
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 1, 2022
3682091
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 3, 2022
ca1e110
Merge remote-tracking branch 'upstream/develop' into develop
ziqi-jin Aug 8, 2022
93ba6a6
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 9, 2022
767842e
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 10, 2022
cc32733
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 10, 2022
3590990
first commit test photo
ziqi-jin Aug 10, 2022
09c64ef
yolov7 doc
ziqi-jin Aug 10, 2022
1915499
yolov7 doc
ziqi-jin Aug 10, 2022
c3cd455
yolov7 doc
ziqi-jin Aug 10, 2022
4c05253
yolov7 doc
ziqi-jin Aug 10, 2022
49486ce
add yolov5 docs
ziqi-jin Aug 10, 2022
6034490
modify yolov5 doc
ziqi-jin Aug 10, 2022
08bf982
first commit for retinaface
ziqi-jin Aug 10, 2022
5e95180
first commit for retinaface
ziqi-jin Aug 10, 2022
80d1ca6
firt commit for ultraface
ziqi-jin Aug 10, 2022
4b97d57
firt commit for ultraface
ziqi-jin Aug 10, 2022
7924e7e
firt commit for yolov5face
ziqi-jin Aug 10, 2022
1728856
firt commit for modnet and arcface
ziqi-jin Aug 10, 2022
3d83654
firt commit for modnet and arcface
ziqi-jin Aug 10, 2022
b53179d
first commit for partial_fc
ziqi-jin Aug 10, 2022
d2a12d1
first commit for partial_fc
ziqi-jin Aug 10, 2022
2aecb96
first commit for yolox
ziqi-jin Aug 10, 2022
7165e0e
first commit for yolov6
ziqi-jin Aug 10, 2022
1c7a578
first commit for nano_det
ziqi-jin Aug 10, 2022
d6a4289
first commit for scrfd
ziqi-jin Aug 11, 2022
dba0368
first commit for scrfd
ziqi-jin Aug 11, 2022
ed5ea33
first commit for retinaface
ziqi-jin Aug 11, 2022
53a89a7
first commit for ultraface
ziqi-jin Aug 11, 2022
c335b6b
first commit for yolov5face
ziqi-jin Aug 11, 2022
2771a3b
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 11, 2022
6ce781e
first commit for yolox yolov6 nano
ziqi-jin Aug 11, 2022
e1c6520
rm jpg
ziqi-jin Aug 11, 2022
483c4ef
first commit for modnet and modify nano
ziqi-jin Aug 11, 2022
a1e29ac
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 11, 2022
5ecc6fe
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 11, 2022
f72fef7
Merge branch 'test_pr_infer' into test_pr
ziqi-jin Aug 11, 2022
217bb39
yolor scaledyolov4 v5lite
ziqi-jin Aug 11, 2022
de3b4de
first commit for insightface
ziqi-jin Aug 11, 2022
eecf5ce
first commit for insightface
ziqi-jin Aug 11, 2022
c04773f
first commit for insightface
ziqi-jin Aug 11, 2022
6a7ed21
docs
ziqi-jin Aug 12, 2022
46497fb
docs
ziqi-jin Aug 12, 2022
fa29760
docs
ziqi-jin Aug 12, 2022
d76a4b7
docs
ziqi-jin Aug 12, 2022
569b4e4
docs
ziqi-jin Aug 12, 2022
1c6c992
add print for detect and modify docs
ziqi-jin Aug 12, 2022
16eb939
docs
ziqi-jin Aug 12, 2022
7219015
docs
ziqi-jin Aug 12, 2022
a1fbc84
docs
ziqi-jin Aug 12, 2022
9ddb5d2
docs test for insightface
ziqi-jin Aug 12, 2022
af3d357
docs test for insightface again
ziqi-jin Aug 12, 2022
eb0b421
docs test for insightface
ziqi-jin Aug 12, 2022
bc53d99
modify all wrong expressions in docs
ziqi-jin Aug 12, 2022
87cb228
modify all wrong expressions in docs
ziqi-jin Aug 12, 2022
b9ef49d
Merge branch 'develop' into test_pr
jiangjiajun Aug 12, 2022
4 changes: 3 additions & 1 deletion docs/api/vision_results/README.md
@@ -4,5 +4,7 @@ Based on the task type of vision models, FastDeploy defines different structs (`csrcs

| Struct | Documentation | Description | Corresponding Models |
| :----- | :--- | :---- | :------- |
| ClassificationResult | [C++/Python docs](./classificiation_result.md) | Image classification result | ResNet50, MobileNetV3, etc. |
| ClassificationResult | [C++/Python docs](./classification_result.md) | Image classification result | ResNet50, MobileNetV3, etc. |
| DetectionResult | [C++/Python docs](./detection_result.md) | Object detection result | PPYOLOE, YOLOv7 series models, etc. |
| FaceDetectionResult | [C++/Python docs](./face_detection_result.md) | Face detection result | RetinaFace, YOLOv5Face series models, etc. |
| MattingResult | [C++/Python docs](./matting_result.md) | Matting result | MODNet series models, etc. |
34 changes: 34 additions & 0 deletions docs/api/vision_results/face_detection_result.md
@@ -0,0 +1,34 @@
# FaceDetectionResult (Face Detection Result)

FaceDetectionResult is defined in `csrcs/fastdeploy/vision/common/result.h` and describes the face bounding boxes, facial landmarks, and detection confidences found in an image.

## C++ Struct

`fastdeploy::vision::FaceDetectionResult`

```
struct FaceDetectionResult {
std::vector<std::array<float, 4>> boxes;
std::vector<std::array<float, 2>> landmarks;
std::vector<float> scores;
ResultType type = ResultType::FACE_DETECTION;
int landmarks_per_face;
void Clear();
std::string Str();
};
```

- **boxes**: Member variable. The coordinates of all face boxes detected in a single image. `boxes.size()` is the number of boxes; each box is represented by 4 floats: xmin, ymin, xmax, ymax, i.e. the top-left and bottom-right corners
- **scores**: Member variable. The confidences of all faces detected in a single image; its element count matches `boxes.size()`
- **landmarks**: Member variable. The landmarks of all faces detected in a single image; the number of landmarks stored per face box is given by `landmarks_per_face`
- **landmarks_per_face**: Member variable. The number of landmarks per face box.
- **Clear()**: Member function. Clears the results stored in the struct
- **Str()**: Member function. Outputs the struct's information as a string (for debugging)

## Python Class

`fastdeploy.vision.FaceDetectionResult`

- **boxes**(list of list(float)): Member variable. The coordinates of all face boxes detected in a single image. boxes is a list whose elements are lists of length 4, each representing one box as 4 floats: xmin, ymin, xmax, ymax, i.e. the top-left and bottom-right corners
- **scores**(list of float): Member variable. The confidences of all faces detected in a single image
- **landmarks**: Member variable. The landmarks of all faces detected in a single image
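
Below is a minimal usage sketch (not part of this PR) showing how these fields can be consumed. It assumes `result` is a `fastdeploy.vision.FaceDetectionResult` already returned by some face detection model's `predict` call; the helper name and threshold are illustrative only:

```
# Hypothetical helper: `result` is a fastdeploy.vision.FaceDetectionResult
def print_faces(result, score_threshold=0.3):
    for box, score in zip(result.boxes, result.scores):
        if score < score_threshold:
            continue
        xmin, ymin, xmax, ymax = box
        print(f"face ({xmin:.1f}, {ymin:.1f})-({xmax:.1f}, {ymax:.1f}), score={score:.3f}")
    print(f"{len(result.landmarks)} landmarks detected in total")
```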
35 changes: 35 additions & 0 deletions docs/api/vision_results/matting_result.md
@@ -0,0 +1,35 @@
# MattingResult (Matting Result)

MattingResult is defined in `csrcs/fastdeploy/vision/common/result.h` and stores the predicted alpha matte and, when available, the predicted foreground for an image.

## C++ Struct

`fastdeploy::vision::MattingResult`

```
struct MattingResult {
std::vector<float> alpha; // h x w
std::vector<float> foreground; // h x w x c (c=3 default)
std::vector<int64_t> shape;
bool contain_foreground = false;
void Clear();
std::string Str();
};
```

- **alpha**: A 1-D vector holding the predicted alpha (transparency) values in the range [0., 1.]; its length is h x w, where h and w are the input image's height and width
- **foreground**: A 1-D vector holding the predicted foreground in the range [0., 255.]; its length is h x w x c, where h and w are the input image's height and width and c is usually 3. The foreground is not always present; it is only valid if the model itself predicts one
- **contain_foreground**: Whether the prediction includes a foreground
- **shape**: The shape of the output. When contain_foreground is false, shape contains only (h, w); when contain_foreground is true, it contains (h, w, c), where c is usually 3
- **Clear()**: Member function. Clears the results stored in the struct
- **Str()**: Member function. Outputs the struct's information as a string (for debugging)


## Python Class

`fastdeploy.vision.MattingResult`

- **alpha**: A 1-D vector holding the predicted alpha (transparency) values in the range [0., 1.]; its length is h x w, where h and w are the input image's height and width
- **foreground**: A 1-D vector holding the predicted foreground in the range [0., 255.]; its length is h x w x c, where h and w are the input image's height and width and c is usually 3. The foreground is not always present; it is only valid if the model itself predicts one
- **contain_foreground**: Whether the prediction includes a foreground
- **shape**: The shape of the output. When contain_foreground is false, shape contains only (h, w); when contain_foreground is true, it contains (h, w, c), where c is usually 3
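
To make the flattened layout concrete, here is a small sketch (our own, using numpy, not part of this PR) that reshapes the 1-D `alpha` and optional `foreground` vectors back into image-shaped arrays, assuming `result` is a `fastdeploy.vision.MattingResult`:

```
import numpy as np

def unpack_matting(result):
    # shape is (h, w) when contain_foreground is False, else (h, w, c)
    h, w = result.shape[0], result.shape[1]
    alpha = np.asarray(result.alpha, dtype=np.float32).reshape(h, w)  # values in [0., 1.]
    foreground = None
    if result.contain_foreground:
        c = result.shape[2]  # usually 3
        foreground = np.asarray(result.foreground, dtype=np.float32).reshape(h, w, c)
    return alpha, foreground
```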
9 changes: 5 additions & 4 deletions examples/vision/README.md
@@ -4,10 +4,11 @@

| Task Type | Description | Prediction Result Struct |
|:-------------- |:----------------------------------- |:-------------------------------------------------------------------------------- |
| Detection | Object detection: takes an image, detects object locations, and returns box coordinates together with category and confidence | [DetectionResult](../../../../docs/api/vision_results/detection_result.md) |
| Segmentation | Semantic segmentation: takes an image and returns the category and confidence of every pixel | [SegmentationResult](../../../../docs/api/vision_results/segmentation_result.md) |
| Classification | Image classification: takes an image and returns the classification result and confidence | [ClassifyResult](../../../../docs/api/vision_results/classification_result.md) |

| Detection | Object detection: takes an image, detects object locations, and returns box coordinates together with category and confidence | [DetectionResult](../../docs/api/vision_results/detection_result.md) |
| Segmentation | Semantic segmentation: takes an image and returns the category and confidence of every pixel | [SegmentationResult](../../docs/api/vision_results/segmentation_result.md) |
| Classification | Image classification: takes an image and returns the classification result and confidence | [ClassifyResult](../../docs/api/vision_results/classification_result.md) |
| FaceDetection | Face detection: takes an image, detects face locations, and returns box coordinates and facial landmarks | [FaceDetectionResult](../../docs/api/vision_results/face_detection_result.md) |
| Matting | Matting: takes an image and returns the alpha value of every foreground pixel | [MattingResult](../../docs/api/vision_results/matting_result.md) |
## FastDeploy API Design

Vision models share a fairly uniform task paradigm. When designing the API (for both C++ and Python), FastDeploy splits the deployment of a vision model into four steps, roughly as sketched below
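
The step list itself sits in the collapsed part of this diff, but the infer.py and infer.cc examples elsewhere in this PR follow the same pattern. A rough Python sketch of that flow (the model class, file names, and GPU choice are placeholders, not a prescribed setup):

```
import cv2
import fastdeploy as fd

# 1. Configure the runtime (device/backend)
option = fd.RuntimeOption()
option.use_gpu()  # omit for CPU inference

# 2. Load the model (NanoDetPlus is just one of the supported models)
model = fd.vision.detection.NanoDetPlus("nanodet-plus-m_320.onnx",
                                        runtime_option=option)

# 3. Read an image and run prediction
im = cv2.imread("test.jpg")
result = model.predict(im)

# 4. Consume the result (print, visualize, post-process, ...)
print(result)
```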
3 changes: 2 additions & 1 deletion examples/vision/classification/paddleclas/python/infer.py
@@ -43,6 +43,7 @@ def build_option(args):

# Configure the runtime and load the model
runtime_option = build_option(args)

model_file = os.path.join(args.model, "inference.pdmodel")
params_file = os.path.join(args.model, "inference.pdiparams")
config_file = os.path.join(args.model, "inference_cls.yaml")
@@ -51,5 +52,5 @@ def build_option(args):

# Predict the image classification result
im = cv2.imread(args.image)
result = model.predict(im, args.topk)
result = model.predict(im.copy(), args.topk)
print(result)
17 changes: 9 additions & 8 deletions examples/vision/detection/README.md
@@ -1,13 +1,14 @@
# Object Detection Models

FastDeploy currently supports the deployment of the following object detection models

| Model | Description | Model Format | Version |
| :--- | :--- | :------- | :--- |
| [PaddleDetection/PPYOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe) | PPYOLOE series models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
| [PaddleDetection/PicoDet](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe) | PicoDet series models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
| [PaddleDetection/YOLOX](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe) | Paddle version of the YOLOX series models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
| [PaddleDetection/YOLOv3](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe) | YOLOv3 series models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
| [PaddleDetection/PPYOLO](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe) | PPYOLO series models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
| [PaddleDetection/FasterRCNN](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe) | FasterRCNN series models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
| [WongKinYiu/YOLOv7](https://github.com/WongKinYiu/yolov7) | YOLOv7, YOLOv7-X and other series models | ONNX | [v0.1](https://github.com/WongKinYiu/yolov7/tree/v0.1) |
| [nanodet_plus](./nanodet_plus) | NanoDetPlus series models | ONNX | Release/v1.0.0-alpha-1 |
| [yolov5](./yolov5) | YOLOv5 series models | ONNX | Release/v6.0 |
| [yolov5lite](./yolov5lite) | YOLOv5-Lite series models | ONNX | Release/v1.4 |
| [yolov6](./yolov6) | YOLOv6 series models | ONNX | Release/0.1.0 |
| [yolov7](./yolov7) | YOLOv7 series models | ONNX | Release/0.1 |
| [yolor](./yolor) | YOLOR series models | ONNX | Release/weights |
| [yolox](./yolox) | YOLOX series models | ONNX | Release/v0.1.1 |
| [scaledyolov4](./scaledyolov4) | ScaledYOLOv4 series models | ONNX | CommitID:6768003 |
11 changes: 8 additions & 3 deletions examples/vision/detection/nanodet_plus/README.md
@@ -2,11 +2,11 @@

## Model Version Notes

- The NanoDetPlus deployment is implemented from the [NanoDetPlus v1.0.0-alpha-1](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) branch code, based on the COCO [pre-trained models](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1).
- The NanoDetPlus deployment is implemented from the [NanoDetPlus](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) code, based on the COCO [pre-trained models](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1).

- (1) The *.onnx files from the [pre-trained models](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1) can be deployed directly;
- (2) For models you trained yourself, export the ONNX model and then refer to the [detailed deployment tutorial](#详细部署文档) to complete the deployment.
- (2) For models you trained yourself, export the ONNX model and then refer to the [detailed deployment docs](#详细部署文档) to complete the deployment.

## Download Pre-trained ONNX Models

For developers' convenience, the exported NanoDetPlus models are listed below and can be downloaded and used directly.
@@ -21,3 +21,8 @@

- [Python deployment](python)
- [C++ deployment](cpp)


## Version Notes

- This document and its code are written based on [NanoDetPlus v1.0.0-alpha-1](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1)
17 changes: 10 additions & 7 deletions examples/vision/detection/nanodet_plus/cpp/README.md
@@ -12,7 +12,7 @@
```
mkdir build
cd build
wget https://xxx.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
tar xvf fastdeploy-linux-x64-gpu-0.2.0.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-gpu-0.2.0
make -j
@@ -32,7 +32,7 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000

After the run completes, the visualized result is shown in the image below

<img width="640" src="https://user-images.githubusercontent.com/67993288/183847558-abcd9a57-9cd9-4891-b09a-710963c99b74.jpg">
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301689-87ee5205-2eff-4204-b615-24c400f01323.jpg">

## NanoDetPlus C++ Interface

@@ -74,11 +74,14 @@ NanoDetPlus model loading and initialization, where model_file is the exported ONNX model

### Class Member Variables

> > * **size**(vector&lt;int&gt;): Modifies the resize size used during preprocessing; contains two integers representing [width, height]; default [640, 640]
> > * **padding_value**(vector&lt;float&gt;): Modifies the value used to pad the image during resize; contains three floats representing the three channels; default [114, 114, 114]
> > * **is_no_pad**(bool): Controls whether the image is resized without padding; `is_no_pad=true` means no padding is used; default `is_no_pad=false`
> > * **is_mini_pad**(bool): Sets the resized width and height to the values closest to the `size` member while keeping the padded pixel count divisible by the `stride` member; default `is_mini_pad=false`
> > * **stride**(int): Used together with the `is_mini_pad` member; default `stride=32`
#### Preprocessing Parameters
Users can adjust the following preprocessing parameters according to their actual needs, which affects the final inference and deployment results

> > * **size**(vector&lt;int&gt;): Modifies the resize size used during preprocessing; contains two integers representing [width, height]; default [320, 320]
> > * **padding_value**(vector&lt;float&gt;): Modifies the value used to pad the image during resize; contains three floats representing the three channels; default [0, 0, 0]
> > * **keep_ratio**(bool): Specifies whether the aspect ratio is kept unchanged during resize; default is false.
> > * **reg_max**(int): The reg_max parameter of the GFL regression; default is 7.
> > * **downsample_strides**(vector&lt;int&gt;): Modifies the downsampling strides of the feature maps used to generate anchors; contains four integers, each a downsampling stride; default [8, 16, 32, 64]

- [Model description](../../)
- [Python deployment](../python)
109 changes: 109 additions & 0 deletions examples/vision/detection/nanodet_plus/cpp/infer.cc
@@ -0,0 +1,109 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "fastdeploy/vision.h"

void CpuInfer(const std::string& model_file, const std::string& image_file) {
  auto model = fastdeploy::vision::detection::NanoDetPlus(model_file);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);
  auto im_bak = im.clone();  // keep an untouched copy for visualization

  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  std::cout << res.Str() << std::endl;
  auto vis_im = fastdeploy::vision::Visualize::VisDetection(im_bak, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

void GpuInfer(const std::string& model_file, const std::string& image_file) {
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();  // run on GPU with the default backend
  auto model =
      fastdeploy::vision::detection::NanoDetPlus(model_file, "", option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);
  auto im_bak = im.clone();  // keep an untouched copy for visualization

  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  std::cout << res.Str() << std::endl;

  auto vis_im = fastdeploy::vision::Visualize::VisDetection(im_bak, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

void TrtInfer(const std::string& model_file, const std::string& image_file) {
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  option.UseTrtBackend();
  // TensorRT is given a fixed input shape for this model
  option.SetTrtInputShape("images", {1, 3, 320, 320});
  auto model =
      fastdeploy::vision::detection::NanoDetPlus(model_file, "", option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);
  auto im_bak = im.clone();  // keep an untouched copy for visualization

  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  std::cout << res.Str() << std::endl;

  auto vis_im = fastdeploy::vision::Visualize::VisDetection(im_bak, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

int main(int argc, char* argv[]) {
  if (argc < 4) {
    std::cout << "Usage: infer_demo path/to/model path/to/image run_option, "
                 "e.g ./infer_model ./nanodet-plus-m_320.onnx ./test.jpeg 0"
              << std::endl;
    std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
                 "with gpu; 2: run with gpu and use tensorrt backend."
              << std::endl;
    return -1;
  }

  if (std::atoi(argv[3]) == 0) {
    CpuInfer(argv[1], argv[2]);
  } else if (std::atoi(argv[3]) == 1) {
    GpuInfer(argv[1], argv[2]);
  } else if (std::atoi(argv[3]) == 2) {
    TrtInfer(argv[1], argv[2]);
  }
  return 0;
}
16 changes: 9 additions & 7 deletions examples/vision/detection/nanodet_plus/python/README.md
@@ -26,7 +26,7 @@ python infer.py --model nanodet-plus-m_320.onnx --image 000000014439.jpg --devic

After the run completes, the visualized result is shown in the image below

<img width="640" src="https://user-images.githubusercontent.com/67993288/183847558-abcd9a57-9cd9-4891-b09a-710963c99b74.jpg">
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301689-87ee5205-2eff-4204-b615-24c400f01323.jpg">

## NanoDetPlus Python Interface

@@ -62,12 +62,14 @@ NanoDetPlus model loading and initialization, where model_file is the exported ONNX model
> > Returns a `fastdeploy.vision.DetectionResult` struct; for its description see the documentation [Vision Model Prediction Results](../../../../../docs/api/vision_results/)

### Class Member Properties

> > * **size**(list[int]): Modifies the resize size used during preprocessing; contains two integers representing [width, height]; default [640, 640]
> > * **padding_value**(list[float]): Modifies the value used to pad the image during resize; contains three floats representing the three channels; default [114, 114, 114]
> > * **is_no_pad**(bool): Controls whether the image is resized without padding; `is_no_pad=True` means no padding is used; default `is_no_pad=False`
> > * **is_mini_pad**(bool): Sets the resized width and height to the values closest to the `size` member while keeping the padded pixel count divisible by the `stride` member; default `is_mini_pad=False`
> > * **stride**(int): Used together with the `is_mini_pad` member; default `stride=32`
#### Preprocessing Parameters
Users can adjust the following preprocessing parameters according to their actual needs, which affects the final inference and deployment results

> > * **size**(list[int]): Modifies the resize size used during preprocessing; contains two integers representing [width, height]; default [320, 320]
> > * **padding_value**(list[float]): Modifies the value used to pad the image during resize; contains three floats representing the three channels; default [0, 0, 0]
> > * **keep_ratio**(bool): Specifies whether the aspect ratio is kept unchanged during resize; default is False.
> > * **reg_max**(int): The reg_max parameter of the GFL regression; default is 7.
> > * **downsample_strides**(list[int]): Modifies the downsampling strides of the feature maps used to generate anchors; contains four integers, each a downsampling stride; default [8, 16, 32, 64]
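
As a concrete illustration, these properties can be set on the loaded model before calling `predict`. A brief sketch (the values are arbitrary examples; the model class and file names follow the usage shown above):

```
import cv2
import fastdeploy as fd

model = fd.vision.detection.NanoDetPlus("nanodet-plus-m_320.onnx")

# Use a larger input resolution and keep the aspect ratio during resize
model.size = [416, 416]
model.keep_ratio = True

# Pad with gray (114) instead of black when resizing
model.padding_value = [114.0, 114.0, 114.0]

result = model.predict(cv2.imread("000000014439.jpg"))
print(result)
```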


