Update: Optimize document #484
Conversation
- Minor fixes in styling and grammar
- Add support for Jetson Xavier NX (tested and working)
- Add hardware recommendation
- Change the JetPack installation guide URL from jp5.0 to jp4.6.1
- Add a note to select "Jetson SDK Components" when using NVIDIA SDK Manager
- Change the PyTorch wheel save location
- Add more dependencies needed for the torchvision installation; otherwise the installation fails
- Simplify the torchvision git cloning branch
- Add installation times for torchvision, MMCV, versioned-hdf5, ppl.cv, the model converter, and the SDK libraries
- Delete "snap" from the cmake removal step, as "apt-get purge" is enough
- Add a note on the scenarios in which the CUDA path and libraries need to be appended to PATH and LD_LIBRARY_PATH (see the sketch below this description)
- Simplify the MMCV git cloning branch
- Delete "skip if you don't need MMDeploy C/C++ Inference SDK", because that is the only available inference SDK at the moment
- Add more details to the object detection demo using the C/C++ Inference SDK, such as installing MMDetection and converting a model
- Add an image of the inference result
- Delete "set env for pip" from troubleshooting, because this is already mentioned under "installing Archiconda"

Signed-off-by: Lakshantha Dissanayake <lakshanthad@seeed.cc>
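For reference, the PATH/LD_LIBRARY_PATH additions mentioned above usually amount to something like the following on JetPack. This is only a sketch: it assumes the default `/usr/local/cuda` install location and is needed only when `nvcc` or the CUDA libraries are not already found (for example, in a fresh shell before building MMCV or the MMDeploy SDK).

```bash
# Append the CUDA toolchain and libraries to the search paths.
# /usr/local/cuda is the default location on JetPack; adjust if yours differs.
export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
```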
Hi @lakshanthad, please fix the lint first, following the doc.
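For anyone following along: OpenMMLab projects lint with pre-commit, so fixing the lint locally looks roughly like the sketch below, assuming the repository's standard `.pre-commit-config.yaml`; the contributing doc remains the authoritative reference.

```bash
# Run the lint hooks defined in .pre-commit-config.yaml over the whole checkout.
pip install pre-commit
pre-commit install          # register the git hook for future commits
pre-commit run --all-files  # auto-fix / report style issues now
```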
- You can find a very detailed installation guide from NVIDIA [official website](https://developer.nvidia.com/jetpack-sdk-50dp).
+ You can find a very detailed installation guide from NVIDIA [official website](https://developer.nvidia.com/jetpack-sdk-461).
+ **Note:** Please select the option to install "Jetson SDK Components" when using NVIDIA SDK Manager as this includes CUDA and TensorRT which are needed for this guide.
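As a quick sanity check that the "Jetson SDK Components" step actually delivered CUDA and TensorRT, something like the following can be run on the device; the exact package names are the usual JetPack ones and may vary between JetPack releases.

```bash
# Verify CUDA and TensorRT after flashing with "Jetson SDK Components" selected.
/usr/local/cuda/bin/nvcc --version      # the CUDA compiler should report its version
dpkg -l | grep -E 'tensorrt|nvinfer'    # TensorRT packages should be listed
```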
We recommend the following note style because it renders more nicely on readthedocs: open the note with ```` ```{note} ```` on its own line and close it with ```` ``` ```` (three backticks, `{note}`, the note text, then three closing backticks).
You can find the rendering at https://mmdeploy.readthedocs.io/en/latest/tutorials/how_to_install_mmdeploy_on_jetsons.html
It is fixed.
Is there anything more to fix?
LGTM
* Update: Optimize document (the full change list is the same as in the PR description above)
* Fix: note style on doc
* Fix: Trim trailing whitespaces
* Update: add source image before inference

(cherry picked from commit 69111a6)
* fix pose demo and windows build (#307)
* add postprocessing_masks gpu version (#276): default device cpu; pre-commit fix; Co-authored-by: hadoop-basecv <hadoop-basecv@set-gh-basecv-serving-classify11.mt>
* fixed a bug causes text-recognizer to fail when (non-NULL) empty bboxes list is passed (#310)
* [Fix] include missing <type_traits> for formatter.h (#313): fix formatter; relax GCC version requirement
* [Fix] MMEditing cannot save results when testing (#336): fix show; lint; remove redundant codes; resolve comment; type hint
* docs(build): fix typo (#352): add missing build option; add onnx install; trim whitespace; revert install onnx; add ncnn LD_LIBRARY_PATH; fix path error
* fix openvino export tmp model, add binary flag (#353)
* init circleci (#348)
* fix wrong input mat type (#362): fix lint
* fix(docs): remove redundant doc tree (#360)
* fix missing ncnn_DIR & InferenceEngine_DIR (#364)
* Fix mmdet openvino dynamic 300x300 cfg base (#372)
* Fix: add onnxruntime building option in gpu dockerfile (#366)
* Tutorial 03: torch2onnx (#365): upload doc; add images; resolve comments; update translation
* [Docs] fix ncnn docs (#378): update 0216
* typo-fix (#397)
* add CUDA_TOOKIT_ROOT_DIR as tensorrt detect dir (#357): Update FindTENSORRT.cmake
* Fix docs (#398)
* ort_net ONNX_TENSOR_ELEMENT_DATA_TYPE_BOOL (#383)
* fix wrong buffer which will case onnxruntime-gpu crash with segmentaion (#363): fix check; fix build error; remove unused header
* fix benchmark (#411)
* Add `sm_53` in cuda.cmake for Jetson Nano which will cashe when process sdk predict. (#407)
* [Fix] fix feature test for `std::source_location` (#416): suppress msvc warnings; fix consistency
* fix format string (#417)
* [Fix] Fix seg name (#394): use default name; Co-authored-by: dongchunyu.vendor <dongchunyu@pjlab.org.cn>
* 【Docs】Add ipython notebook tutorial (#234): add ipynb file; rename file; add open in colab tag; fix lint and add img show; fix open in colab link; fix comments; fix pre-commit config
* fix mmpose api (#396): use fmt::format instead; fix potential nullptr access
* [Fix] support latest spdlog (#423): support formatting `PixelFormat` & `DataType`; format enum for legacy spdlog; fix format
* fix pillarencode (#331)
* fix ONNXRuntime cuda test bug (#438)
* Fix ci in master branch (#441)
* [Doc] Improve Jetson tutorial install doc (#381): improve Jetson build doc; add torchvision in the doc; fix lint; fix arg bug; remove incorrect process; add more detail on `Conda`; add python version detail; install `onnx` instead of `onnxruntime`; fix gramma; update installation detail; fix tensorrt and cudnn path; improve FAQs; pplcv not switch branch since the `sm_53` missing; export `TENSORRT_DIR`; using pre-build cmake to update; improve sentence and add jetpack version; move TENSORRT_DIR in the `Make TensorRT env` step; improve CUDA detail; improve conda installation; improve TensorRT installation; add pip crash detail and FAQ; refine the jetson installation guide; improve python version; add detail for `Runtime` problem; fix word; various updates to how_to_install_mmdeploy_on_jetsons.md; Co-authored-by: lvhan028 <lvhan_028@163.com>
* Version comments added, torch install steps added. (#449)
* [Docs] Fix API documentation (#443): add onnx dependency in readthedocs.txt; fix dependencies
* [Fix] Fix display bugs for windows (#451): fix issue 330 for windows; fix code; fix lint; fix all platform
* [Docs] Minor fixes and translation of installation tutorial for Jetson (#415): minor fixes; add Jetson installation; updated zh_cn based on new en version
* If a cuda launch error occurs, verify if cuda device requires top_k to be reduced (#479): fixed lint; clang-format
* [Fix] set optional arg a default value (#483): optional default value; resolve comments; Co-authored-by: dongchunyu.vendor <dongchunyu@pjlab.org.cn>
* Update: Optimize document (#484): the full change list is the same as in the PR description above; Fix: note style on doc; Fix: Trim trailing whitespaces; Update: add source image before inference
* fix: bbox_nms not onnxizing if batch size > 1 (#501): a typo prevents nms from onnxizing correctly if batch size is static and greater than 1
* change seperator of function marker (#499)
* [docs] Fix typo in tutorial (#509)
* Fix docstring format (#495): fix doc common; fix bugs
* Tutorial 04: onnx custom op (#508): add tutorial04; lint; add image; resolve comment
* fix mmseg twice resize (#480): remove comment
* Fix mask test with mismatched device (#511): align mask output to cpu device; align ncnn ssd output to torch.Tensor type
* compat mmpose v0.26 (#518)
* [Docs] adding new backends when using MMDeploy as a third package (#482): update doc; refine expression; cn doc
* Tutorial 05: ONNX Model Editing (#517): upload image; resolve comments
* fix pspnet torchscript conversion (#538): resolve comment; add IR to rewrite
* changing the onnxwrapper script for gpu issue (#532): update wrapper.py and runtime.txt

Co-authored-by: Chen Xin <xinchen.tju@gmail.com> Co-authored-by: Shengxi Li <982783556@qq.com> Co-authored-by: hadoop-basecv <hadoop-basecv@set-gh-basecv-serving-classify11.mt> Co-authored-by: lzhangzz <lzhang329@gmail.com> Co-authored-by: Yifan Zhou <singlezombie@163.com> Co-authored-by: tpoisonooo <khj.application@aliyun.com> Co-authored-by: HinGwenWoong <peterhuang0323@outlook.com> Co-authored-by: Junjie <61398820+Adenialzz@users.noreply.github.com> Co-authored-by: hanrui1sensetime <83800577+hanrui1sensetime@users.noreply.github.com> Co-authored-by: q.yao <streetyao@live.com> Co-authored-by: Song Lin <92794867+triple-Mu@users.noreply.github.com> Co-authored-by: zly19540609 <31341706+zly19540609@users.noreply.github.com> Co-authored-by: RunningLeon <mnsheng@yeah.net> Co-authored-by: HinGwenWoong <peterhuang0323@qq.com> Co-authored-by: AllentDan <41138331+AllentDan@users.noreply.github.com> Co-authored-by: dongchunyu.vendor <dongchunyu@pjlab.org.cn> Co-authored-by: VVsssssk <88368822+VVsssssk@users.noreply.github.com> Co-authored-by: NagatoYuki0943 <72508155+NagatoYuki0943@users.noreply.github.com> Co-authored-by: Johannes L <tehkillerbee@users.noreply.github.com> Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com> Co-authored-by: chaoqun <cdongae@connect.ust.hk> Co-authored-by: Lakshantha Dissanayake <lakshanthad@yahoo.com> Co-authored-by: Yifan Gu <gyf304@users.noreply.github.com> Co-authored-by: Zhiqiang Wang <zhiqwang@foxmail.com> Co-authored-by: sanjaypavo <93761297+sanjaypavo@users.noreply.github.com>
Signed-off-by: Lakshantha Dissanayake <lakshanthad@seeed.cc>