diff --git a/README_CN.md b/README_CN.md
index 30e50f4c5d..00a6fe1eda 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -39,7 +39,7 @@

- 🔥 **2022.11.8: Release FastDeploy [release v0.6.0](https://github.com/PaddlePaddle/FastDeploy/tree/release/0.6.0)**

-    - **🖥️ Server-side deployment: faster inference backends, support for more models**
+    - **🖥️ Server-side deployment: faster inference backends, support for more models**
        - Optimized the pre/post-processing memory allocation logic of the YOLO series, PaddleClas and PaddleDetection;
        - Fused vision preprocessing operators, speeding up PaddleClas and PaddleDetection preprocessing and improving end-to-end inference performance;
        - Added a Clone interface for serving deployment, reducing the memory/GPU-memory usage of the Paddle Inference/TensorRT/OpenVINO backends under multiple instances;
@@ -47,13 +47,13 @@
    - **📲 Mobile and edge deployment: upgraded mobile backends, support for more CV models**
        - Integrated the RKNPU2 backend, providing a development experience consistent with the Paddle Inference, Paddle Inference TensorRT, TensorRT, OpenVINO, ONNX Runtime and Paddle Lite backends;
        - Supported featured models in high demand on NPUs, such as [PP-HumanSeg](./examples/vision/segmentation/paddleseg/rknpu2), [Unet](./examples/vision/segmentation/paddleseg/rknpu2), [PicoDet](examples/vision/detection/paddledetection/rknpu2) and [SCRFD](./examples/vision/facedet/scrfd/rknpu2).

- [**more releases information**](./releases)

## Contents
* 📖 Tutorials (click to collapse)
  - Installation
    - [Download and install prebuilt libraries](docs/cn/build_and_install/download_prebuilt_libraries.md)
    - [Build and install for GPU deployment](docs/cn/build_and_install/gpu.md)
@@ -112,7 +112,7 @@
```bash
pip install numpy opencv-python fastdeploy-gpu-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
```
-##### [Conda installation (recommended)](docs/quick_start/Python_prebuilt_wheels.md)
+##### [Conda installation (recommended)](docs/cn/build_and_install/download_prebuilt_libraries.md)
```bash
conda config --add channels conda-forge && conda install cudatoolkit=11.2 cudnn=8.2
```
@@ -155,7 +155,7 @@
cv2.imwrite("vis_image.jpg", vis_im)
C++ SDK Quick Start (click for details)
#### Install
@@ -211,7 +211,7 @@ int main(int argc, char* argv[]) {
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| --- | --- | --- | X86 CPU | NVIDIA GPU | X86 CPU | NVIDIA GPU | X86 CPU | Arm CPU | AArch64 CPU | NVIDIA Jetson | Graphcore IPU | Serving |
| Classification | [PaddleClas/ResNet50](./examples/vision/classification/paddleclas) | [Python](./examples/vision/classification/paddleclas/python)/[C++](./examples/vision/classification/paddleclas/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ |
-| Classification | [TorchVision/ResNet](examples/vision/classification/resnet) | [Python](./examples/vision/classification/resnet/python)/[C++](./examples/vision/classification/resnet/python/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
+| Classification | [TorchVision/ResNet](examples/vision/classification/resnet) | [Python](./examples/vision/classification/resnet/python)/[C++](./examples/vision/classification/resnet/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| Classification | [ultralytics/YOLOv5Cls](examples/vision/classification/yolov5cls) | [Python](./examples/vision/classification/yolov5cls/python)/[C++](./examples/vision/classification/yolov5cls/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| Classification | [PaddleClas/PP-LCNet](./examples/vision/classification/paddleclas) | [Python](./examples/vision/classification/paddleclas/python)/[C++](./examples/vision/classification/paddleclas/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ |
| Classification | [PaddleClas/PP-LCNetv2](./examples/vision/classification/paddleclas) | [Python](./examples/vision/classification/paddleclas/python)/[C++](./examples/vision/classification/paddleclas/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ |
@@ -243,9 +243,9 @@ int main(int argc, char* argv[]) {
| Detection | [WongKinYiu/ScaledYOLOv4](./examples/vision/detection/scaledyolov4) | [Python](./examples/vision/detection/scaledyolov4/python)/[C++](./examples/vision/detection/scaledyolov4/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| Detection | [ppogg/YOLOv5Lite](./examples/vision/detection/yolov5lite) | [Python](./examples/vision/detection/yolov5lite/python)/[C++](./examples/vision/detection/yolov5lite/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| Detection | [RangiLyu/NanoDetPlus](./examples/vision/detection/nanodet_plus) | [Python](./examples/vision/detection/nanodet_plus/python)/[C++](./examples/vision/detection/nanodet_plus/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
-| KeyPoint | [PaddleDetection/TinyPose](./examples/vision/keypointdetection/tiny_pose) | [Python](./examples/vision/keypointdetection/tiny_pose/python)/[C++](./examples/vision/keypointdetection/tiny_pose/python/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
+| KeyPoint | [PaddleDetection/TinyPose](./examples/vision/keypointdetection/tiny_pose) | [Python](./examples/vision/keypointdetection/tiny_pose/python)/[C++](./examples/vision/keypointdetection/tiny_pose/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| KeyPoint | [PaddleDetection/PicoDet + TinyPose](./examples/vision/keypointdetection/det_keypoint_unite) | [Python](./examples/vision/keypointdetection/det_keypoint_unite/python)/[C++](./examples/vision/keypointdetection/det_keypoint_unite/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
-| HeadPose | [omasaht/headpose](examples/vision/headpose) | [Python](./xamples/vision/headpose/fsanet/python)/[C++](./xamples/vision/headpose/fsanet/cpp/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
+| HeadPose | [omasaht/headpose](examples/vision/headpose) | [Python](./examples/vision/headpose/fsanet/python)/[C++](./examples/vision/headpose/fsanet/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| Tracking | [PaddleDetection/PP-Tracking](examples/vision/tracking/pptracking) | [Python](examples/vision/tracking/pptracking/python)/[C++](examples/vision/tracking/pptracking/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| OCR | [PaddleOCR/PP-OCRv2](./examples/vision/ocr) | [Python](./examples/vision/ocr/PP-OCRv3/python)/[C++](./examples/vision/ocr/PP-OCRv3/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| OCR | [PaddleOCR/PP-OCRv3](./examples/vision/ocr) | [Python](./examples/vision/ocr/PP-OCRv3/python)/[C++](./examples/vision/ocr/PP-OCRv3/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
@@ -331,12 +331,12 @@ int main(int argc, char* argv[]) {
| OCR | [PaddleOCR/PP-OCRv2](examples/vision/ocr/PP-OCRv2) | 2.3+4.4 | ✅ | ❔ | ❔ | ❔ | -- | -- | -- | -- |
| OCR | [PaddleOCR/PP-OCRv3](examples/vision/ocr/PP-OCRv3) | 2.4+10.6 | ✅ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | -- |
| OCR | PaddleOCR/PP-OCRv3-tiny | 2.4+10.7 | ❔ | ❔ | ❔ | ❔ | -- | -- | -- | -- |

## 🌐 Web and Mini Program Deployment
| Task | Model | [web_demo](examples/application/js/web_demo) |
|:---:|:---:|:---:|
| --- | --- | [Paddle.js](examples/application/js) |
@@ -346,7 +346,7 @@
| Object Recognition | [GestureRecognition](examples/application/js/web_demo/src/pages/cv/recognition) | ✅ |
| Object Recognition | [ItemIdentification](examples/application/js/web_demo/src/pages/cv/recognition) | ✅ |
| OCR | [PaddleOCR/PP-OCRv3](./examples/application/js/web_demo/src/pages/cv/ocr) | ✅ |
## Community

diff --git a/README_EN.md b/README_EN.md
index b89e7f125e..b439b2c1b8 100644
--- a/README_EN.md
+++ b/README_EN.md
@@ -38,7 +38,7 @@ Including image classification, object detection, image segmentation, face detec
- 🔥 **2022.11.8: Release FastDeploy [release v0.6.0](https://github.com/PaddlePaddle/FastDeploy/tree/release/0.6.0)**
    - **🖥️ Server-side and Cloud Deployment: Support more backends, support more CV models**
        - Optimize preprocessing and postprocessing memory creation logic on YOLO series, PaddleClas, PaddleDetection;
@@ -54,7 +54,7 @@ Including image classification, object detection, image segmentation, face detec

## Contents
* 📖 Tutorials (click to fold)
  - Install
    - [How to Install FastDeploy Prebuilt Libraries](docs/en/build_and_install/download_prebuilt_libraries.md)
    - [How to Build and Install FastDeploy Library on GPU Platform](docs/en/build_and_install/gpu.md)
@@ -158,7 +158,7 @@
vis_im = vision.vis_detection(im, result, score_threshold=0.5)
cv2.imwrite("vis_image.jpg", vis_im)
```
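The hunk above shows only the last two lines of the README's Python quick start. For orientation, here is a minimal sketch of the detection example those lines belong to; the PP-YOLOE file names and the `predict` call follow the FastDeploy quick start but are assumptions of this note, not content of the patch:

```python
import cv2
import fastdeploy.vision as vision

# Assumed quick-start files: a PP-YOLOE model exported by PaddleDetection
model = vision.detection.PPYOLOE("ppyoloe_crn_l_300e_coco/model.pdmodel",
                                 "ppyoloe_crn_l_300e_coco/model.pdiparams",
                                 "ppyoloe_crn_l_300e_coco/infer_cfg.yml")

im = cv2.imread("000000014439.jpg")  # any test image
result = model.predict(im)

# The two context lines visible in the hunk above:
vis_im = vision.vis_detection(im, result, score_threshold=0.5)
cv2.imwrite("vis_image.jpg", vis_im)
```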
@@ -213,13 +213,13 @@ Notes: ✅: already supported; ❔: to be supported in the future; N/A: Not Ava
| Task | Model | API | Linux | Linux | Win | Win | Mac | Mac | Linux | Linux | Linux | Linux |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| --- | --- | --- | X86 CPU | NVIDIA GPU | Intel CPU | NVIDIA GPU | Intel CPU | Arm CPU | AArch64 CPU | NVIDIA Jetson | Graphcore IPU | Serving |
| Classification | [PaddleClas/ResNet50](./examples/vision/classification/paddleclas) | [Python](./examples/vision/classification/paddleclas/python)/[C++](./examples/vision/classification/paddleclas/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ |
-| Classification | [TorchVision/ResNet](examples/vision/classification/resnet) | [Python](./examples/vision/classification/resnet/python)/[C++](./examples/vision/classification/resnet/python/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
+| Classification | [TorchVision/ResNet](examples/vision/classification/resnet) | [Python](./examples/vision/classification/resnet/python)/[C++](./examples/vision/classification/resnet/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| Classification | [ultralytics/YOLOv5Cls](examples/vision/classification/yolov5cls) | [Python](./examples/vision/classification/yolov5cls/python)/[C++](./examples/vision/classification/yolov5cls/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| Classification | [PaddleClas/PP-LCNet](./examples/vision/classification/paddleclas) | [Python](./examples/vision/classification/paddleclas/python)/[C++](./examples/vision/classification/paddleclas/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ |
| Classification | [PaddleClas/PP-LCNetv2](./examples/vision/classification/paddleclas) | [Python](./examples/vision/classification/paddleclas/python)/[C++](./examples/vision/classification/paddleclas/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ |
@@ -251,9 +251,9 @@ Notes: ✅: already supported; ❔: to be supported in the future; N/A: Not Ava
| Detection | [WongKinYiu/ScaledYOLOv4](./examples/vision/detection/scaledyolov4) | [Python](./examples/vision/detection/scaledyolov4/python)/[C++](./examples/vision/detection/scaledyolov4/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| Detection | [ppogg/YOLOv5Lite](./examples/vision/detection/yolov5lite) | [Python](./examples/vision/detection/yolov5lite/python)/[C++](./examples/vision/detection/yolov5lite/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| Detection | [RangiLyu/NanoDetPlus](./examples/vision/detection/nanodet_plus) | [Python](./examples/vision/detection/nanodet_plus/python)/[C++](./examples/vision/detection/nanodet_plus/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
-| KeyPoint | [PaddleDetection/TinyPose](./examples/vision/keypointdetection/tiny_pose) | [Python](./examples/vision/keypointdetection/tiny_pose/python)/[C++](./examples/vision/keypointdetection/tiny_pose/python/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
+| KeyPoint | [PaddleDetection/TinyPose](./examples/vision/keypointdetection/tiny_pose) | [Python](./examples/vision/keypointdetection/tiny_pose/python)/[C++](./examples/vision/keypointdetection/tiny_pose/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| KeyPoint | [PaddleDetection/PicoDet + TinyPose](./examples/vision/keypointdetection/det_keypoint_unite) | [Python](./examples/vision/keypointdetection/det_keypoint_unite/python)/[C++](./examples/vision/keypointdetection/det_keypoint_unite/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
-| HeadPose | [omasaht/headpose](examples/vision/headpose) | [Python](./xamples/vision/headpose/fsanet/python)/[C++](./xamples/vision/headpose/fsanet/cpp/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
+| HeadPose | [omasaht/headpose](examples/vision/headpose) | [Python](./examples/vision/headpose/fsanet/python)/[C++](./examples/vision/headpose/fsanet/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| Tracking | [PaddleDetection/PP-Tracking](examples/vision/tracking/pptracking) | [Python](examples/vision/tracking/pptracking/python)/[C++](examples/vision/tracking/pptracking/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| OCR | [PaddleOCR/PP-OCRv2](./examples/vision/ocr) | [Python](./examples/vision/ocr/PP-OCRv3/python)/[C++](./examples/vision/ocr/PP-OCRv3/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| OCR | [PaddleOCR/PP-OCRv3](./examples/vision/ocr) | [Python](./examples/vision/ocr/PP-OCRv3/python)/[C++](./examples/vision/ocr/PP-OCRv3/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
@@ -280,21 +280,21 @@ Notes: ✅: already supported; ❔: to be supported in the future; N/A: Not Ava
| Information Extraction | [PaddleNLP/UIE](./examples/text/uie) | [Python](./examples/text/uie/python)/[C++](./examples/text/uie/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| NLP | [PaddleNLP/ERNIE-3.0](./examples/text/ernie-3.0) | Python/C++ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | ✅ |
| Speech | [PaddleSpeech/PP-TTS](./examples/audio/pp-tts) | [Python](examples/audio/pp-tts/python)/C++ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | -- | ✅ |
## 📱 Mobile and Edge Device Deployment
### Paddle Lite NPU Deployment

- [Rockchip-NPU / Amlogic-NPU / NXP-NPU](./examples/vision/detection/paddledetection/rk1126)
### Mobile and Edge Model List 🔥🔥🔥🔥
@@ -340,11 +340,11 @@ Notes: ✅: already supported; ❔: to be supported in the future; N/A: Not Ava
| OCR | [PaddleOCR/PP-OCRv3](examples/vision/ocr/PP-OCRv3) | 2.4+10.6 | ✅ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | -- |
| OCR | PaddleOCR/PP-OCRv3-tiny | 2.4+10.7 | ❔ | ❔ | ❔ | ❔ | -- | -- | -- | -- |

## 🌐 Browser-based Model List
| Task | Model | [web_demo](examples/application/js/web_demo) |
|:---:|:---:|:---:|
| --- | --- | [Paddle.js](examples/application/js) |
@@ -355,7 +355,7 @@
| Object Recognition | [ItemIdentification](examples/application/js/web_demo/src/pages/cv/recognition) | ✅ |
| OCR | [PaddleOCR/PP-OCRv3](./examples/application/js/web_demo/src/pages/cv/ocr) | ✅ |

## Community
diff --git a/docs/api_docs/cpp/main_page.md b/docs/api_docs/cpp/main_page.md
index 017ad7b60d..585e6d4286 100644
--- a/docs/api_docs/cpp/main_page.md
+++ b/docs/api_docs/cpp/main_page.md
@@ -18,14 +18,14 @@ Currently, FastDeploy supported backends listed as below,
- [C++ examples](./)

### Related APIs
-- [RuntimeOption](./structfastdeploy_1_1RuntimeOption.html)
-- [Runtime](./structfastdeploy_1_1Runtime.html)
+- [RuntimeOption](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/structfastdeploy_1_1RuntimeOption.html)
+- [Runtime](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/structfastdeploy_1_1Runtime.html)

## Vision Models
| Task | Model | API | Example |
| :---- | :---- | :---- | :----- |
-| object detection | PaddleDetection/PPYOLOE | [fastdeploy::vision::detection::PPYOLOE](./classfastdeploy_1_1vision_1_1detection_1_1PPYOLOE.html) | [C++](./)/[Python](./) |
-| keypoint detection | PaddleDetection/PPTinyPose | [fastdeploy::vision::keypointdetection::PPTinyPose](./classfastdeploy_1_1vision_1_1keypointdetection_1_1PPTinyPose.html) | [C++](./)/[Python](./) |
-| image classification | PaddleClassification serials | [fastdeploy::vision::classification::PaddleClasModel](./classfastdeploy_1_1vision_1_1classification_1_1PaddleClasModel.html) | [C++](./)/[Python](./) |
-| semantic segmentation | PaddleSegmentation serials | [fastdeploy::vision::classification::PaddleSegModel](./classfastdeploy_1_1vision_1_1segmentation_1_1PaddleSegModel.html) | [C++](./)/[Python](./) |
+| object detection | PaddleDetection/PPYOLOE | [fastdeploy::vision::detection::PPYOLOE](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/classfastdeploy_1_1vision_1_1detection_1_1PPYOLOE.html) | [C++](./)/[Python](./) |
+| keypoint detection | PaddleDetection/PPTinyPose | [fastdeploy::vision::keypointdetection::PPTinyPose](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/classfastdeploy_1_1pipeline_1_1PPTinyPose.html) | [C++](./)/[Python](./) |
+| image classification | PaddleClas series | [fastdeploy::vision::classification::PaddleClasModel](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/classfastdeploy_1_1vision_1_1classification_1_1PaddleClasModel.html) | [C++](./)/[Python](./) |
+| semantic segmentation | PaddleSeg series | [fastdeploy::vision::segmentation::PaddleSegModel](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/classfastdeploy_1_1vision_1_1segmentation_1_1PaddleSegModel.html) | [C++](./)/[Python](./) |
diff --git a/examples/runtime/cpp/README.md b/examples/runtime/cpp/README.md
index 9de8b1d627..38d25041dd 100644
--- a/examples/runtime/cpp/README.md
+++ b/examples/runtime/cpp/README.md
@@ -2,8 +2,8 @@

Before running the demo, confirm the following two steps:

-- 1. The hardware/software environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the prebuilt deployment library and sample code for your development environment; see [FastDeploy Prebuilt Libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. The hardware/software environment meets the requirements; see [FastDeploy Environment Requirements](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the prebuilt deployment library and sample code for your development environment; see [FastDeploy Prebuilt Libraries](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

This document takes the PaddleClas classification model MobileNetV2 as an example to show inference on the CPU.

@@ -113,9 +113,9 @@ make -j
source /Path/to/fastdeploy_cpp_sdk/fastdeploy_init.sh
```

-This sample code runs on all platforms (Windows/Linux/Mac), but the build steps above only support Linux/Mac; on Windows, build with msbuild as described in [Using the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+This sample code runs on all platforms (Windows/Linux/Mac), but the build steps above only support Linux/Mac; on Windows, build with msbuild as described in [Using the FastDeploy C++ SDK on Windows](../../../docs/cn/faq/use_sdk_on_windows.md)

## Other Documents

- [Runtime Python example](../python)
-- [Switching inference hardware and backend](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Switching inference hardware and backend](../../../docs/cn/faq/how_to_change_backend.md)

diff --git a/examples/runtime/python/README.md b/examples/runtime/python/README.md
index c9692fca6b..42f0070518 100644
--- a/examples/runtime/python/README.md
+++ b/examples/runtime/python/README.md
@@ -2,8 +2,8 @@

Before running the demo, confirm the following two steps:

-- 1. The hardware/software environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install the FastDeploy Python wheel; see [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. The hardware/software environment meets the requirements; see [FastDeploy Environment Requirements](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install the FastDeploy Python wheel; see [FastDeploy Python Installation](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

This document takes the PaddleClas classification model MobileNetV2 as an example to show inference on the CPU.

@@ -50,4 +50,4 @@ print(results[0].shape)

## Other Documents

- [Runtime C++ example](../cpp)
-- [Switching inference hardware and backend](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Switching inference hardware and backend](../../../docs/cn/faq/how_to_change_backend.md)
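Both runtime READMEs describe the same MobileNetV2 demo, but the hunks show only its final line, `print(results[0].shape)`. A minimal sketch of the Python flow they refer to; the file names and `fastdeploy` runtime calls mirror the repository's runtime examples and should be read as assumptions:

```python
import numpy as np
import fastdeploy as fd

# Assumed paths to the exported MobileNetV2 Paddle inference model
option = fd.RuntimeOption()
option.set_model_path("mobilenetv2/inference.pdmodel",
                      "mobilenetv2/inference.pdiparams")
option.use_ort_backend()  # CPU inference via the ONNX Runtime backend

runtime = fd.Runtime(option)

# Build a dummy batch matching the model's first input and run inference
input_name = runtime.get_input_info(0).name
data = np.random.rand(1, 3, 224, 224).astype("float32")
results = runtime.infer({input_name: data})
print(results[0].shape)
```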
diff --git a/examples/text/uie/cpp/README.md b/examples/text/uie/cpp/README.md
index a3b8f801b1..c4fcc7f4f5 100644
--- a/examples/text/uie/cpp/README.md
+++ b/examples/text/uie/cpp/README.md
@@ -466,8 +466,7 @@ void Predict(
**Parameters**

> * **texts**(list(str)): list of input texts
-> * **results**(list(dict())): extraction results of the UIE model. See the [UIEResult description](../../../../docs/api/text_results/uie_result.md) for details of the UIEResult structure.
-
+> * **results**(list(dict())): extraction results of the UIE model.

## Related Documents

[Detailed introduction to the UIE model](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md)

diff --git a/examples/text/uie/python/README.md b/examples/text/uie/python/README.md
index 099820dbb2..ca855c762b 100644
--- a/examples/text/uie/python/README.md
+++ b/examples/text/uie/python/README.md
@@ -375,7 +375,7 @@ UIEModel loading and initialization, where `model_file` and `params_file` are the trained model
> > * **return_dict**(bool): whether to return the UIE results as a dict; defaults to False.
> **Returns**
>
-> > Returns `dict(str, list(fastdeploy.text.C.UIEResult))`; see the [UIEResult description](../../../../docs/api/text_results/uie_result.md) for details.
+> > Returns `dict(str, list(fastdeploy.text.C.UIEResult))`.

## Related Documents

diff --git a/examples/vision/detection/paddledetection/rk1126/picodet_detection/README.md b/examples/vision/detection/paddledetection/rk1126/picodet_detection/README.md
index 735ff1976e..ddebdeb7ad 100755
--- a/examples/vision/detection/paddledetection/rk1126/picodet_detection/README.md
+++ b/examples/vision/detection/paddledetection/rk1126/picodet_detection/README.md
@@ -118,7 +118,7 @@ Paddle-Lite-Demo/object_detection/linux/picodet_detection/run.sh

## Code Walkthrough (inference with the Paddle Lite C++ API)

-The ARMLinux demo is developed with the C++ API; calling the Paddle Lite C++ API involves the five steps below. For a more detailed API description, see: [Paddle Lite C++ API](https://paddle-lite.readthedocs.io/zh/latest/api_reference/c++_api_doc.html).
+The ARMLinux demo is developed with the C++ API; calling the Paddle Lite C++ API involves the five steps below. For a more detailed API description, see: [Paddle Lite C++ API](https://paddle-lite.readthedocs.io/zh/latest/api_reference/cxx_api_doc.html).

```c++
#include "paddle_api.h"
@@ -198,7 +198,7 @@
export LD_LIBRARY_PATH=../Paddle-Lite/libs/$TARGET_ABI/
export GLOG_v=0 # Paddle-Lite log level
export VSI_NN_LOG_LEVEL=0 # TIM-VX log level
export VIV_VX_ENABLE_GRAPH_TRANSFORM=-pcq:1 # enable per-channel quantized models on the NPU
-export VIV_VX_SET_PER_CHANNEL_ENTROPY=100 # same as above
+export VIV_VX_SET_PER_CHANNEL_ENTROPY=100 # same as above
build/object_detection_demo models/picodetv2_relu6_coco_no_fuse ../../assets/labels/coco_label_list.txt models/picodetv2_relu6_coco_no_fuse/subgraph.txt models/picodetv2_relu6_coco_no_fuse/picodet.yml # run the demo; the 4 args are: model, label file, custom heterogeneous config, yaml
```

```shell
# source file `object_detection_demo/run.sh`
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${PADDLE_LITE_DIR}/libs/${TARGET_ARCH_ABI}
build/object_detection_demo {model} {label} {custom heterogeneous config} {yaml}
```

diff --git a/examples/vision/detection/yolov5/quantize/README_EN.md b/examples/vision/detection/yolov5/quantize/README_EN.md
index c439470adc..c704ccce2e 100644
--- a/examples/vision/detection/yolov5/quantize/README_EN.md
+++ b/examples/vision/detection/yolov5/quantize/README_EN.md
@@ -6,7 +6,7 @@ Users can use the one-click model quantization tool to quantize and deploy the m

## FastDeploy One-Click Model Quantization Tool

FastDeploy provides a one-click quantization tool that allows users to quantize a model simply with a configuration file.
-For a detailed tutorial, please refer to: [One-Click Model Quantization Tool](... /... /... /... /... /... /... /tools/quantization/)
+For a detailed tutorial, please refer to: [One-Click Model Quantization Tool](../../../../../tools/auto_compression/)

## Download Quantized YOLOv5s Model

diff --git a/examples/vision/detection/yolov6/quantize/README_EN.md b/examples/vision/detection/yolov6/quantize/README_EN.md
index 5fd3082bcc..b7ae61d115 100644
--- a/examples/vision/detection/yolov6/quantize/README_EN.md
+++ b/examples/vision/detection/yolov6/quantize/README_EN.md
@@ -6,7 +6,7 @@ Users can use the one-click model quantization tool to quantize and deploy the m

## FastDeploy One-Click Model Quantization Tool

FastDeploy provides a one-click quantization tool that allows users to quantize a model simply with a configuration file.
-For detailed tutorial, please refer to : [One-Click Model Quantization Tool](... /... /... /... /... /... /... /tools/quantization/)
+For detailed tutorial, please refer to : [One-Click Model Quantization Tool](../../../../../tools/auto_compression/)

## Download Quantized YOLOv6s Model

diff --git a/examples/vision/detection/yolov7/quantize/README_EN.md b/examples/vision/detection/yolov7/quantize/README_EN.md
index 039000d9e9..4e6b2e3533 100644
--- a/examples/vision/detection/yolov7/quantize/README_EN.md
+++ b/examples/vision/detection/yolov7/quantize/README_EN.md
@@ -6,7 +6,7 @@ Users can use the one-click model quantization tool to quantize and deploy the m

## FastDeploy One-Click Model Quantization Tool

FastDeploy provides a one-click quantization tool that allows users to quantize a model simply with a configuration file.
-For detailed tutorial, please refer to : [One-Click Model Quantization Tool](... /... /... /... /... /... /... /tools/quantization/)
+For detailed tutorial, please refer to : [One-Click Model Quantization Tool](../../../../../tools/auto_compression/)

## Download Quantized YOLOv7 Model
diff --git a/examples/vision/ocr/PP-OCRv3/web/README.md b/examples/vision/ocr/PP-OCRv3/web/README.md
index 3afd247612..b13e7547d5 100644
--- a/examples/vision/ocr/PP-OCRv3/web/README.md
+++ b/examples/vision/ocr/PP-OCRv3/web/README.md
@@ -37,4 +37,4 @@ OCR model loading and initialization, where the model is in Paddle.js format; for js model conversion see
- [PP-OCRv3 C++ deployment](../cpp)
- [Model prediction result description](../../../../../docs/api/vision_results/)
- [How to switch the inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
-- [PP-OCRv3 WeChat mini program deployment](../../../../application/web_demo/examples/ocrXcx/)
+- [PP-OCRv3 WeChat mini program deployment](../mini_program/)

diff --git a/serving/docs/EN/model_repository-en.md b/serving/docs/EN/model_repository-en.md
index 6d8251549a..099593be44 100644
--- a/serving/docs/EN/model_repository-en.md
+++ b/serving/docs/EN/model_repository-en.md
@@ -1,6 +1,6 @@
# Model Repository

-FastDeploy starts the serving by specifying one or more models in the model repository to deploy the service. When the serving is running, the models in the service can be modified following [Model Management](https://github.com/triton-inference-server/server/blob/main/docs/model_management.md), and obtain serving from one or more model repositories specified at the serving initiation.
+FastDeploy starts the serving by specifying one or more models in the model repository to deploy the service. When the serving is running, the models in the service can be modified following [Model Management](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_management.md), and obtain serving from one or more model repositories specified at the serving initiation.

## Repository Architecture

@@ -39,7 +39,7 @@
Paddle models are saved in the version number subdirectory, which must contain the `model.pdmodel` and `model.pdiparams` files.

## Model Version

-Each model can have one or more versions available in the repository. The subdirectory named with a number in the model directory implies the version number. Subdirectories that are not named with a number, or that start with *0* will be ignored. A [version policy](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md#version-policy) can be specified in the model configuration file to control which version of the model in model directory is launched by Triton.
+Each model can have one or more versions available in the repository. The subdirectory named with a number in the model directory implies the version number. Subdirectories that are not named with a number, or that start with *0* will be ignored. A [version policy](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#version-policy) can be specified in the model configuration file to control which version of the model in model directory is launched by Triton.
## Repository Demo

diff --git a/serving/docs/zh_CN/model_configuration.md b/serving/docs/zh_CN/model_configuration.md
index ce3abc0759..6ae51af640 100644
--- a/serving/docs/zh_CN/model_configuration.md
+++ b/serving/docs/zh_CN/model_configuration.md
@@ -2,7 +2,7 @@
Every model in the model repository must include a model configuration that provides required and optional information about the model. This configuration is usually written in a *config.pbtxt* file, in [ModelConfig protobuf](https://github.com/triton-inference-server/common/blob/main/protobuf/model_config.proto) format.

## Minimal General Model Configuration

-For the detailed general model configuration, see the official documentation: [model_configuration](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md). Triton's minimal model configuration must include the *platform* or *backend* property, the *max_batch_size* property, and the model's inputs and outputs.
+For the detailed general model configuration, see the official documentation: [model_configuration](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md). Triton's minimal model configuration must include the *platform* or *backend* property, the *max_batch_size* property, and the model's inputs and outputs.

For example, a Paddle model with two inputs *input0* and *input1* and one output *output0*, all float32 tensors, with a maximum batch size of 8, has the following minimal configuration:

diff --git a/serving/docs/zh_CN/model_repository.md b/serving/docs/zh_CN/model_repository.md
index adff771ffa..bc46cb3522 100644
--- a/serving/docs/zh_CN/model_repository.md
+++ b/serving/docs/zh_CN/model_repository.md
@@ -1,6 +1,6 @@
# Model Repository

-When FastDeploy starts the service, it deploys one or more models specified from the model repository. While the service is running, the models being served can be modified as described in [Model Management](https://github.com/triton-inference-server/server/blob/main/docs/model_management.md).
+When FastDeploy starts the service, it deploys one or more models specified from the model repository. While the service is running, the models being served can be modified as described in [Model Management](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_management.md).
Models are served from one or more model repositories specified at server startup.

## Repository Structure

@@ -36,7 +36,7 @@ $ fastdeploy --model-repository=
Paddle models live in the version-number subdirectory and must consist of a `model.pdmodel` file and a `model.pdiparams` file.

## Model Version

-Each model can have one or more available versions in the repository; numerically named subdirectories of the model directory are the versions, the number being the version number. Subdirectories not named with a number, or whose names start with *0*, are ignored. A [version policy](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md#version-policy) can be specified in the model configuration file to control which version in the model directory Triton launches.
+Each model can have one or more available versions in the repository; numerically named subdirectories of the model directory are the versions, the number being the version number. Subdirectories not named with a number, or whose names start with *0*, are ignored. A [version policy](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#version-policy) can be specified in the model configuration file to control which version in the model directory Triton launches.

## Model Repository Example

When deploying a Paddle model, the model must be an inference model exported with Paddle 2.0 or later, with the `model.pdmodel` and `model.pdiparams` files placed in the version directory.
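To make the minimal configuration described in serving/docs/zh_CN/model_configuration.md concrete, here is a sketch of the *config.pbtxt* for the example in that hunk: two float32 inputs, one float32 output, maximum batch size 8. The `dims` values are illustrative assumptions, and the backend name follows FastDeploy Serving's Triton integration; neither is part of this patch:

```
backend: "fastdeploy"   # backend name used by FastDeploy Serving (assumed)
max_batch_size: 8
input [
  {
    name: "input0"
    data_type: TYPE_FP32
    dims: [ 16 ]        # illustrative shape, not from the patch
  },
  {
    name: "input1"
    data_type: TYPE_FP32
    dims: [ 16 ]
  }
]
output [
  {
    name: "output0"
    data_type: TYPE_FP32
    dims: [ 16 ]
  }
]
```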