diff --git a/README.md b/README.md index 217ce0ba59..5adeff848f 100644 --- a/README.md +++ b/README.md @@ -27,10 +27,10 @@ [![Percentage of issues still open](https://isitmaintained.com/badge/open/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/issues) [📘Documentation](https://mmpose.readthedocs.io/en/1.x/) | -[🛠️Installation](https://mmpose.readthedocs.io/en/1.x/install.html) | -[👀Model Zoo](https://mmpose.readthedocs.io/en/1.x/modelzoo.html) | -[📜Papers](https://mmpose.readthedocs.io/en/1.x/papers/algorithms.html) | -[🆕Update News](https://mmpose.readthedocs.io/en/1.x/changelog.html) | +[🛠️Installation](https://mmpose.readthedocs.io/en/1.x/installation.html) | +[👀Model Zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo.html) | +[📜Papers](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html) | +[🆕Update News](https://mmpose.readthedocs.io/en/1.x/notes/changelog.html) | [🤔Reporting Issues](https://github.com/open-mmlab/mmpose/issues/new/choose) @@ -52,17 +52,17 @@ https://user-images.githubusercontent.com/15977946/124654387-0fd3c500-ded1-11eb- - **Support diverse tasks** We support a wide spectrum of mainstream pose analysis tasks in current research community, including 2d multi-person human pose estimation, 2d hand pose estimation, 2d face landmark detection, 133 keypoint whole-body human pose estimation, 3d human mesh recovery, fashion landmark detection and animal pose estimation. - See [demo.md](demo/README.md) for more information. + See [Demo](demo/docs/) for more information. - **Higher efficiency and higher accuracy** MMPose implements multiple state-of-the-art (SOTA) deep learning models, including both top-down & bottom-up approaches. We achieve faster training speed and higher accuracy than other popular codebases, such as [HRNet](https://github.com/leoxiaobin/deep-high-resolution-net.pytorch). - See [benchmark.md](docs/en/benchmark.md) for more information. + See [benchmark.md](docs/en/notes/benchmark.md) for more information. - **Support for various datasets** The toolbox directly supports multiple popular and representative datasets, COCO, AIC, MPII, MPII-TRB, OCHuman etc. - See [data_preparation.md](docs/en/data_preparation.md) for more information. + See [dataset_zoo](docs/en/dataset_zoo) for more information. 
- **Well designed, tested and documented** @@ -99,17 +99,17 @@ Please refer to [installation.md](https://mmpose.readthedocs.io/en/1.x/installat We provided a series of tutorials about the basic usage of MMPose for new users: -- [About Configs](https://mmpose.readthedocs.io/en/1.x/user_guides/configs.md) -- [Add New Dataset](https://mmpose.readthedocs.io/en/1.x/user_guides/prepare_datasets.md) -- [Keypoint Encoding & Decoding](https://mmpose.readthedocs.io/en/1.x/user_guides/codecs.md) -- [Inference with Existing Models](https://mmpose.readthedocs.io/en/1.x/user_guides/inference.md) -- [Train and Test](https://mmpose.readthedocs.io/en/1.x/user_guides/train_and_test.md) -- [Visualization Tools](https://mmpose.readthedocs.io/en/1.x/user_guides/visualization.md) -- [Other Useful Tools](https://mmpose.readthedocs.io/en/1.x/user_guides/useful_tools.md) +- [About Configs](https://mmpose.readthedocs.io/en/1.x/user_guides/configs.html) +- [Add New Dataset](https://mmpose.readthedocs.io/en/1.x/user_guides/prepare_datasets.html) +- [Keypoint Encoding & Decoding](https://mmpose.readthedocs.io/en/1.x/user_guides/codecs.html) +- [Inference with Existing Models](https://mmpose.readthedocs.io/en/1.x/user_guides/inference.html) +- [Train and Test](https://mmpose.readthedocs.io/en/1.x/user_guides/train_and_test.html) +- [Visualization Tools](https://mmpose.readthedocs.io/en/1.x/user_guides/visualization.html) +- [Other Useful Tools](https://mmpose.readthedocs.io/en/1.x/user_guides/useful_tools.html) ## Model Zoo -Results and models are available in the *README.md* of each method's config directory. +Results and models are available in the **README.md** of each method's config directory. A summary can be found in the [Model Zoo](https://mmpose.readthedocs.io/en/1.x/modelzoo.html) page.
diff --git a/README_CN.md b/README_CN.md index 6df2349eed..4e33bdf8b0 100644 --- a/README_CN.md +++ b/README_CN.md @@ -27,10 +27,10 @@ [![Percentage of issues still open](https://isitmaintained.com/badge/open/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/issues) [📘文档](https://mmpose.readthedocs.io/zh_CN/1.x/) | -[🛠️安装](https://mmpose.readthedocs.io/zh_CN/1.x/install.html) | -[👀模型库](https://mmpose.readthedocs.io/zh_CN/1.x/modelzoo.html) | -[📜论文库](https://mmpose.readthedocs.io/zh_CN/1.x/papers/algorithms.html) | -[🆕更新日志](https://mmpose.readthedocs.io/en/1.x/changelog.html) | +[🛠️安装](https://mmpose.readthedocs.io/zh_CN/1.x/installation.html) | +[👀模型库](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo.html) | +[📜论文库](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html) | +[🆕更新日志](https://mmpose.readthedocs.io/zh_CN/1.x/notes/changelog.html) | [🤔报告问题](https://github.com/open-mmlab/mmpose/issues/new/choose) @@ -39,7 +39,7 @@ [English](./README.md) | 简体中文 -MMPose 是一款基于 PyTorch 的姿态分析的开源工具箱,是 [OpenMMLab](http://openmmlab.org/) 项目的成员之一。 +MMPose 是一款基于 PyTorch 的姿态分析的开源工具箱,是 [OpenMMLab](https://github.com/open-mmlab) 项目的成员之一。 主分支代码目前支持 **PyTorch 1.6 以上**的版本。 @@ -51,16 +51,16 @@ https://user-images.githubusercontent.com/15977946/124654387-0fd3c500-ded1-11eb- - **支持多种人体姿态分析相关任务** MMPose 支持当前学界广泛关注的主流姿态分析任务:主要包括 2D多人姿态估计、2D手部姿态估计、2D人脸关键点检测、133关键点的全身人体姿态估计、3D人体形状恢复、服饰关键点检测、动物关键点检测等。 - 具体请参考 [功能演示](demo/README.md)。 + 具体请参考 [功能演示](demo/docs/)。 - **更高的精度和更快的速度** MMPose 复现了多种学界最先进的人体姿态分析模型,包括“自顶向下”和“自底向上”两大类算法。MMPose 相比于其他主流的代码库,具有更高的模型精度和训练速度。 - 具体请参考 [基准测试](docs/en/benchmark.md)(英文)。 + 具体请参考 [基准测试](docs/en/notes/benchmark.md)(英文)。 - **支持多样的数据集** - MMPose 支持了很多主流数据集的准备和构建,如 COCO、 MPII 等。 具体请参考 [数据集准备](docs/en/data_preparation.md)。 + MMPose 支持了很多主流数据集的准备和构建,如 COCO、 MPII 等。 具体请参考 [数据集](docs/zh_cn/dataset_zoo)。 - **模块化设计** @@ -74,7 +74,7 @@ https://user-images.githubusercontent.com/15977946/124654387-0fd3c500-ded1-11eb- ## 最新进展 -- 2022-05-05: MMPose [v1.0.0b0](https://github.com/open-mmlab/mmpose/releases/tag/1.x) 已经发布. 主要更新包括: +- 2022-09-01: MMPose [v1.0.0b0](https://github.com/open-mmlab/mmpose/releases/tag/v1.0.0b0) 已经发布.
主要更新包括: - 对 MMPose 进行了重大重构,旨在提升算法库性能和可扩展性,并使其更容易上手。 - 基于一个全新的,可扩展性强的训练和测试引擎,但目前仍在开发中。欢迎根据[文档](https://mmpose.readthedocs.io/zh_CN/1.x/)进行试用。 - 新版本中存在一些与旧版本不兼容的修改。请查看[迁移文档](https://mmpose.readthedocs.io/zh_CN/1.x/migration.html)来详细了解这些变动。 @@ -96,119 +96,119 @@ cd mmpose mim install -e ``` -关于安装的详细说明请参考[文档](https://mmpose.readthedocs.io/zh_CN/1.x/installation.html)。 +关于安装的详细说明请参考[安装文档](https://mmpose.readthedocs.io/zh_CN/1.x/installation.html)。 ## 教程 我们提供了一系列简明的教程,帮助 MMPose 的新用户轻松上手使用: -- [学习配置文件](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/configs.md) -- [准备数据集](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/prepare_datasets.md) -- [关键点编码、解码机制](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/codecs.md) -- [使用现有模型推理](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/inference.md) -- [模型训练和测试](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/train_and_test.md) -- [可视化工具](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/visualization.md) -- [其他实用工具](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/useful_tools.md) +- [学习配置文件](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/configs.html) +- [准备数据集](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/prepare_datasets.html) +- [关键点编码、解码机制](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/codecs.html) +- [使用现有模型推理](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/inference.html) +- [模型训练和测试](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/train_and_test.html) +- [可视化工具](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/visualization.html) +- [其他实用工具](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/useful_tools.html) ## 模型库 -各个模型的结果和设置都可以在对应的 config(配置)目录下的 *README.md* 中查看。 -整体的概况也可也在 [模型库](https://mmpose.readthedocs.io/zh_CN/1.x/recognition_models.html) 页面中查看。 +各个模型的结果和设置都可以在对应的 config(配置)目录下的 **README.md** 中查看。 +整体的概况也可也在 [模型库](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo.html) 页面中查看。
支持的算法 -- [x] [DeepPose](https://mmpose.readthedocs.io/zh_CN/1.x/papers/algorithms.html#deeppose-cvpr-2014) (CVPR'2014) -- [x] [CPM](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#cpm-cvpr-2016) (CVPR'2016) -- [x] [Hourglass](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#hourglass-eccv-2016) (ECCV'2016) -- [x] [SimpleBaseline3D](https://mmpose.readthedocs.io/zh_CN/1.x/papers/algorithms.html#simplebaseline3d-iccv-2017) (ICCV'2017) -- [x] [Associative Embedding](https://mmpose.readthedocs.io/zh_CN/1.x/papers/algorithms.html#associative-embedding-nips-2017) (NeurIPS'2017) -- [x] [HMR](https://mmpose.readthedocs.io/zh_CN/1.x/papers/algorithms.html#hmr-cvpr-2018) (CVPR'2018) -- [x] [SimpleBaseline2D](https://mmpose.readthedocs.io/zh_CN/1.x/papers/algorithms.html#simplebaseline2d-eccv-2018) (ECCV'2018) -- [x] [HRNet](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#hrnet-cvpr-2019) (CVPR'2019) -- [x] [VideoPose3D](https://mmpose.readthedocs.io/zh_CN/1.x/papers/algorithms.html#videopose3d-cvpr-2019) (CVPR'2019) -- [x] [HRNetv2](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#hrnetv2-tpami-2019) (TPAMI'2019) -- [x] [MSPN](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#mspn-arxiv-2019) (ArXiv'2019) -- [x] [SCNet](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#scnet-cvpr-2020) (CVPR'2020) -- [x] [HigherHRNet](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#higherhrnet-cvpr-2020) (CVPR'2020) -- [x] [RSN](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#rsn-eccv-2020) (ECCV'2020) -- [x] [InterNet](https://mmpose.readthedocs.io/zh_CN/1.x/papers/algorithms.html#internet-eccv-2020) (ECCV'2020) -- [x] [VoxelPose](https://mmpose.readthedocs.io/zh_CN/1.x/papers/algorithms.html#voxelpose-eccv-2020) (ECCV'2020 -- [x] [LiteHRNet](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#litehrnet-cvpr-2021) (CVPR'2021) -- [x] [ViPNAS](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#vipnas-cvpr-2021) (CVPR'2021) +- [x] [DeepPose](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#deeppose-cvpr-2014) (CVPR'2014) +- [x] [CPM](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#cpm-cvpr-2016) (CVPR'2016) +- [x] [Hourglass](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#hourglass-eccv-2016) (ECCV'2016) +- [x] [SimpleBaseline3D](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#simplebaseline3d-iccv-2017) (ICCV'2017) +- [x] [Associative Embedding](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#associative-embedding-nips-2017) (NeurIPS'2017) +- [x] [HMR](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#hmr-cvpr-2018) (CVPR'2018) +- [x] [SimpleBaseline2D](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#simplebaseline2d-eccv-2018) (ECCV'2018) +- [x] [HRNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#hrnet-cvpr-2019) (CVPR'2019) +- [x] [VideoPose3D](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#videopose3d-cvpr-2019) (CVPR'2019) +- [x] [HRNetv2](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#hrnetv2-tpami-2019) (TPAMI'2019) +- [x] [MSPN](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#mspn-arxiv-2019) (ArXiv'2019) +- [x] [SCNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#scnet-cvpr-2020) 
(CVPR'2020) +- [x] [HigherHRNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#higherhrnet-cvpr-2020) (CVPR'2020) +- [x] [RSN](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#rsn-eccv-2020) (ECCV'2020) +- [x] [InterNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#internet-eccv-2020) (ECCV'2020) +- [x] [VoxelPose](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#voxelpose-eccv-2020) (ECCV'2020) +- [x] [LiteHRNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#litehrnet-cvpr-2021) (CVPR'2021) +- [x] [ViPNAS](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#vipnas-cvpr-2021) (CVPR'2021)
支持的技术 -- [x] [FPN](https://mmpose.readthedocs.io/zh_CN/1.x/papers/techniques.html#fpn-cvpr-2017) (CVPR'2017) -- [x] [FP16](https://mmpose.readthedocs.io/zh_CN/1.x/papers/techniques.html#fp16-arxiv-2017) (ArXiv'2017) -- [x] [Wingloss](https://mmpose.readthedocs.io/zh_CN/1.x/papers/techniques.html#wingloss-cvpr-2018) (CVPR'2018) -- [x] [AdaptiveWingloss](https://mmpose.readthedocs.io/zh_CN/1.x/papers/techniques.html#adaptivewingloss-iccv-2019) (ICCV'2019) -- [x] [DarkPose](https://mmpose.readthedocs.io/zh_CN/1.x/papers/techniques.html#darkpose-cvpr-2020) (CVPR'2020) -- [x] [UDP](https://mmpose.readthedocs.io/zh_CN/1.x/papers/techniques.html#udp-cvpr-2020) (CVPR'2020) -- [x] [Albumentations](https://mmpose.readthedocs.io/zh_CN/1.x/papers/techniques.html#albumentations-information-2020) (Information'2020) -- [x] [SoftWingloss](https://mmpose.readthedocs.io/zh_CN/1.x/papers/techniques.html#softwingloss-tip-2021) (TIP'2021) +- [x] [FPN](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#fpn-cvpr-2017) (CVPR'2017) +- [x] [FP16](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#fp16-arxiv-2017) (ArXiv'2017) +- [x] [Wingloss](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#wingloss-cvpr-2018) (CVPR'2018) +- [x] [AdaptiveWingloss](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#adaptivewingloss-iccv-2019) (ICCV'2019) +- [x] [DarkPose](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#darkpose-cvpr-2020) (CVPR'2020) +- [x] [UDP](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#udp-cvpr-2020) (CVPR'2020) +- [x] [Albumentations](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#albumentations-information-2020) (Information'2020) +- [x] [SoftWingloss](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#softwingloss-tip-2021) (TIP'2021) - [x] [SmoothNet](/configs/_base_/filters/smoothnet_h36m.md) (arXiv'2021) -- [x] [RLE](https://mmpose.readthedocs.io/zh_CN/1.x/papers/techniques.html#rle-iccv-2021) (ICCV'2021) +- [x] [RLE](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#rle-iccv-2021) (ICCV'2021)
-支持的数据集 - -- [x] [AFLW](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#aflw-iccvw-2011) \[[homepage](https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/aflw/)\] (ICCVW'2011) -- [x] [sub-JHMDB](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#jhmdb-iccv-2013) \[[homepage](http://jhmdb.is.tue.mpg.de/dataset)\] (ICCV'2013) -- [x] [COFW](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#cofw-iccv-2013) \[[homepage](http://www.vision.caltech.edu/xpburgos/ICCV13/)\] (ICCV'2013) -- [x] [MPII](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#mpii-cvpr-2014) \[[homepage](http://human-pose.mpi-inf.mpg.de/)\] (CVPR'2014) -- [x] [Human3.6M](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#human3-6m-tpami-2014) \[[homepage](http://vision.imar.ro/human3.6m/description.php)\] (TPAMI'2014) -- [x] [COCO](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#coco-eccv-2014) \[[homepage](http://cocodataset.org/)\] (ECCV'2014) -- [x] [CMU Panoptic](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#cmu-panoptic-iccv-2015) (ICCV'2015) -- [x] [DeepFashion](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#deepfashion-cvpr-2016) \[[homepage](http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion/LandmarkDetection.html)\] (CVPR'2016) -- [x] [300W](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#300w-imavis-2016) \[[homepage](https://ibug.doc.ic.ac.uk/resources/300-W/)\] (IMAVIS'2016) -- [x] [RHD](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#rhd-iccv-2017) \[[homepage](https://lmb.informatik.uni-freiburg.de/resources/datasets/RenderedHandposeDataset.en.html)\] (ICCV'2017) -- [x] [CMU Panoptic](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#cmu-panoptic-iccv-2015) \[[homepage](http://domedb.perception.cs.cmu.edu/)\] (ICCV'2015) -- [x] [AI Challenger](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#ai-challenger-arxiv-2017) \[[homepage](https://github.com/AIChallenger/AI_Challenger_2017)\] (ArXiv'2017) -- [x] [MHP](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#mhp-acm-mm-2018) \[[homepage](https://lv-mhp.github.io/dataset)\] (ACM MM'2018) -- [x] [WFLW](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#wflw-cvpr-2018) \[[homepage](https://wywu.github.io/projects/LAB/WFLW.html)\] (CVPR'2018) -- [x] [PoseTrack18](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#posetrack18-cvpr-2018) \[[homepage](https://posetrack.net/users/download.php)\] (CVPR'2018) -- [x] [OCHuman](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#ochuman-cvpr-2019) \[[homepage](https://github.com/liruilong940607/OCHumanApi)\] (CVPR'2019) -- [x] [CrowdPose](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#crowdpose-cvpr-2019) \[[homepage](https://github.com/Jeff-sjtu/CrowdPose)\] (CVPR'2019) -- [x] [MPII-TRB](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#mpii-trb-iccv-2019) \[[homepage](https://github.com/kennymckormick/Triplet-Representation-of-human-Body)\] (ICCV'2019) -- [x] [FreiHand](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#freihand-iccv-2019) \[[homepage](https://lmb.informatik.uni-freiburg.de/projects/freihand/)\] (ICCV'2019) -- [x] [Animal-Pose](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#animal-pose-iccv-2019) \[[homepage](https://sites.google.com/view/animal-pose/)\] (ICCV'2019) -- [x] 
[OneHand10K](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#onehand10k-tcsvt-2019) \[[homepage](https://www.yangangwang.com/papers/WANG-MCC-2018-10.html)\] (TCSVT'2019) -- [x] [Vinegar Fly](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#vinegar-fly-nature-methods-2019) \[[homepage](https://github.com/jgraving/DeepPoseKit-Data)\] (Nature Methods'2019) -- [x] [Desert Locust](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#desert-locust-elife-2019) \[[homepage](https://github.com/jgraving/DeepPoseKit-Data)\] (Elife'2019) -- [x] [Grévy’s Zebra](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#grevys-zebra-elife-2019) \[[homepage](https://github.com/jgraving/DeepPoseKit-Data)\] (Elife'2019) -- [x] [ATRW](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#atrw-acm-mm-2020) \[[homepage](https://cvwc2019.github.io/challenge.html)\] (ACM MM'2020) -- [x] [Halpe](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#halpe-cvpr-2020) \[[homepage](https://github.com/Fang-Haoshu/Halpe-FullBody/)\] (CVPR'2020) -- [x] [COCO-WholeBody](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#coco-wholebody-eccv-2020) \[[homepage](https://github.com/jin-s13/COCO-WholeBody/)\] (ECCV'2020) -- [x] [MacaquePose](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#macaquepose-biorxiv-2020) \[[homepage](http://www.pri.kyoto-u.ac.jp/datasets/macaquepose/index.html)\] (bioRxiv'2020) -- [x] [InterHand2.6M](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#interhand2-6m-eccv-2020) \[[homepage](https://mks0601.github.io/InterHand2.6M/)\] (ECCV'2020) -- [x] [AP-10K](https://mmpose.readthedocs.io/en/1.x/papers/datasets.html#ap-10k-neurips-2021) \[[homepage](https://github.com/AlexTheBad/AP-10K)\] (NeurIPS'2021) -- [x] [Horse-10](https://mmpose.readthedocs.io/zh_CN/1.x/papers/datasets.html#horse-10-wacv-2021) \[[homepage](http://www.mackenziemathislab.org/horse10)\] (WACV'2021) +支持的数据集 + +- [x] [AFLW](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#aflw-iccvw-2011) \[[主页](https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/aflw/)\] (ICCVW'2011) +- [x] [sub-JHMDB](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#jhmdb-iccv-2013) \[[主页](http://jhmdb.is.tue.mpg.de/dataset)\] (ICCV'2013) +- [x] [COFW](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#cofw-iccv-2013) \[[主页](http://www.vision.caltech.edu/xpburgos/ICCV13/)\] (ICCV'2013) +- [x] [MPII](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#mpii-cvpr-2014) \[[主页](http://human-pose.mpi-inf.mpg.de/)\] (CVPR'2014) +- [x] [Human3.6M](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#human3-6m-tpami-2014) \[[主页](http://vision.imar.ro/human3.6m/description.php)\] (TPAMI'2014) +- [x] [COCO](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#coco-eccv-2014) \[[主页](http://cocodataset.org/)\] (ECCV'2014) +- [x] [CMU Panoptic](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#cmu-panoptic-iccv-2015) (ICCV'2015) +- [x] [DeepFashion](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#deepfashion-cvpr-2016) \[[主页](http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion/LandmarkDetection.html)\] (CVPR'2016) +- [x] [300W](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#300w-imavis-2016) \[[主页](https://ibug.doc.ic.ac.uk/resources/300-W/)\] (IMAVIS'2016) +- [x] 
[RHD](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#rhd-iccv-2017) \[[主页](https://lmb.informatik.uni-freiburg.de/resources/datasets/RenderedHandposeDataset.en.html)\] (ICCV'2017) +- [x] [CMU Panoptic](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#cmu-panoptic-iccv-2015) \[[主页](http://domedb.perception.cs.cmu.edu/)\] (ICCV'2015) +- [x] [AI Challenger](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#ai-challenger-arxiv-2017) \[[主页](https://github.com/AIChallenger/AI_Challenger_2017)\] (ArXiv'2017) +- [x] [MHP](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#mhp-acm-mm-2018) \[[主页](https://lv-mhp.github.io/dataset)\] (ACM MM'2018) +- [x] [WFLW](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#wflw-cvpr-2018) \[[主页](https://wywu.github.io/projects/LAB/WFLW.html)\] (CVPR'2018) +- [x] [PoseTrack18](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#posetrack18-cvpr-2018) \[[主页](https://posetrack.net/users/download.php)\] (CVPR'2018) +- [x] [OCHuman](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#ochuman-cvpr-2019) \[[主页](https://github.com/liruilong940607/OCHumanApi)\] (CVPR'2019) +- [x] [CrowdPose](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#crowdpose-cvpr-2019) \[[主页](https://github.com/Jeff-sjtu/CrowdPose)\] (CVPR'2019) +- [x] [MPII-TRB](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#mpii-trb-iccv-2019) \[[主页](https://github.com/kennymckormick/Triplet-Representation-of-human-Body)\] (ICCV'2019) +- [x] [FreiHand](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#freihand-iccv-2019) \[[主页](https://lmb.informatik.uni-freiburg.de/projects/freihand/)\] (ICCV'2019) +- [x] [Animal-Pose](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#animal-pose-iccv-2019) \[[主页](https://sites.google.com/view/animal-pose/)\] (ICCV'2019) +- [x] [OneHand10K](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#onehand10k-tcsvt-2019) \[[主页](https://www.yangangwang.com/papers/WANG-MCC-2018-10.html)\] (TCSVT'2019) +- [x] [Vinegar Fly](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#vinegar-fly-nature-methods-2019) \[[主页](https://github.com/jgraving/DeepPoseKit-Data)\] (Nature Methods'2019) +- [x] [Desert Locust](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#desert-locust-elife-2019) \[[主页](https://github.com/jgraving/DeepPoseKit-Data)\] (Elife'2019) +- [x] [Grévy’s Zebra](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#grevys-zebra-elife-2019) \[[主页](https://github.com/jgraving/DeepPoseKit-Data)\] (Elife'2019) +- [x] [ATRW](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#atrw-acm-mm-2020) \[[主页](https://cvwc2019.github.io/challenge.html)\] (ACM MM'2020) +- [x] [Halpe](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#halpe-cvpr-2020) \[[主页](https://github.com/Fang-Haoshu/Halpe-FullBody/)\] (CVPR'2020) +- [x] [COCO-WholeBody](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#coco-wholebody-eccv-2020) \[[主页](https://github.com/jin-s13/COCO-WholeBody/)\] (ECCV'2020) +- [x] [MacaquePose](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#macaquepose-biorxiv-2020) \[[主页](http://www.pri.kyoto-u.ac.jp/datasets/macaquepose/index.html)\] (bioRxiv'2020) +- [x] 
[InterHand2.6M](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#interhand2-6m-eccv-2020) \[[主页](https://mks0601.github.io/InterHand2.6M/)\] (ECCV'2020) +- [x] [AP-10K](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#ap-10k-neurips-2021) \[[主页](https://github.com/AlexTheBad/AP-10K)\] (NeurIPS'2021) +- [x] [Horse-10](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#horse-10-wacv-2021) \[[主页](http://www.mackenziemathislab.org/horse10)\] (WACV'2021)
支持的骨干网络 -- [x] [AlexNet](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#alexnet-neurips-2012) (NeurIPS'2012) -- [x] [VGG](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#vgg-iclr-2015) (ICLR'2015) -- [x] [ResNet](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#resnet-cvpr-2016) (CVPR'2016) -- [x] [ResNext](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#resnext-cvpr-2017) (CVPR'2017) -- [x] [SEResNet](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#seresnet-cvpr-2018) (CVPR'2018) -- [x] [ShufflenetV1](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#shufflenetv1-cvpr-2018) (CVPR'2018) -- [x] [ShufflenetV2](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#shufflenetv2-eccv-2018) (ECCV'2018) -- [x] [MobilenetV2](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#mobilenetv2-cvpr-2018) (CVPR'2018) -- [x] [ResNetV1D](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#resnetv1d-cvpr-2019) (CVPR'2019) -- [x] [ResNeSt](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#resnest-arxiv-2020) (ArXiv'2020) -- [x] [Swin](https://mmpose.readthedocs.io/en/1.x/papers/backbones.html#swin-cvpr-2021) (CVPR'2021) -- [x] [HRFormer](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#hrformer-nips-2021) (NIPS'2021) -- [x] [PVT](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#pvt-iccv-2021) (ICCV'2021) -- [x] [PVTV2](https://mmpose.readthedocs.io/zh_CN/1.x/papers/backbones.html#pvtv2-cvmj-2022) (CVMJ'2022) +- [x] [AlexNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#alexnet-neurips-2012) (NeurIPS'2012) +- [x] [VGG](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#vgg-iclr-2015) (ICLR'2015) +- [x] [ResNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#resnet-cvpr-2016) (CVPR'2016) +- [x] [ResNext](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#resnext-cvpr-2017) (CVPR'2017) +- [x] [SEResNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#seresnet-cvpr-2018) (CVPR'2018) +- [x] [ShufflenetV1](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#shufflenetv1-cvpr-2018) (CVPR'2018) +- [x] [ShufflenetV2](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#shufflenetv2-eccv-2018) (ECCV'2018) +- [x] [MobilenetV2](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#mobilenetv2-cvpr-2018) (CVPR'2018) +- [x] [ResNetV1D](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#resnetv1d-cvpr-2019) (CVPR'2019) +- [x] [ResNeSt](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#resnest-arxiv-2020) (ArXiv'2020) +- [x] [Swin](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#swin-cvpr-2021) (CVPR'2021) +- [x] [HRFormer](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#hrformer-nips-2021) (NIPS'2021) +- [x] [PVT](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#pvt-iccv-2021) (ICCV'2021) +- [x] [PVTV2](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#pvtv2-cvmj-2022) (CVMJ'2022)
@@ -246,7 +246,7 @@ MMPose 是一款由不同学校和公司共同贡献的开源项目。我们感 - [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab 深度学习模型训练基础库 - [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab 计算机视觉基础库 -- [MIM](https://github.com/open-mmlab/mim): MIM 是 OpenMMlab 项目、算法、模型的统一入口 +- [MIM](https://github.com/open-mmlab/mim): OpenMMlab 项目、算法、模型的统一入口 - [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab 图像分类工具箱 - [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab 目标检测工具箱 - [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab 新一代通用 3D 目标检测平台 diff --git a/demo/docs/2d_animal_demo.md b/demo/docs/2d_animal_demo.md index 74e28c9f6f..38ee3078ea 100644 --- a/demo/docs/2d_animal_demo.md +++ b/demo/docs/2d_animal_demo.md @@ -15,7 +15,7 @@ python demo/topdown_demo_with_mmdet.py \ [--device ${GPU_ID or CPU}] ``` -The pre-trained animal pose estimation model can be found from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/animal.html). +The pre-trained animal pose estimation model can be found from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/animal_2d_keypoint.html). Take [animalpose model](https://download.openmmlab.com/mmpose/animal/hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth) as an example: ```shell @@ -39,7 +39,7 @@ The augement `--det-cat-id=15` selected detected bounding boxes with label 'cat' **COCO-animals** In COCO dataset, there are 80 object categories, including 10 common `animal` categories (14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe'). -For other animals, we have also provided some pre-trained animal detection models (1-class models). Supported models can be found in [det model zoo](/demo/docs/mmdet_modelzoo.md). +For other animals, we have also provided some pre-trained animal detection models (1-class models). Supported models can be found in [detection model zoo](/demo/docs/mmdet_modelzoo.md). To save visualized results on disk: @@ -67,7 +67,7 @@ python demo/topdown_demo_with_mmdet.py \ ### 2D Animal Pose Video Demo -Videos share same interface with images. The difference is, the `${INPUT_PATH}` for videos can be the local path or **URL** link to video file. +Videos share the same interface with images. The difference is that the `${INPUT_PATH}` for videos can be the local path or **URL** link to video file. For example, @@ -89,5 +89,5 @@ The original video can be downloaded from [Google Drive](https://drive.google.co Some tips to speed up MMPose inference: -1. set `model.test_cfg.flip_test=False` in [animalpose_hrnet-w32](../../configs/animal_2d_keypoint/topdown_heatmap/animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py). -2. use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/en/latest/model_zoo.html). +1. set `model.test_cfg.flip_test=False` in [animalpose_hrnet-w32](../../configs/animal_2d_keypoint/topdown_heatmap/animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py#85). +2. use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html). diff --git a/demo/docs/2d_face_demo.md b/demo/docs/2d_face_demo.md index a7965a5d05..d72d33307b 100644 --- a/demo/docs/2d_face_demo.md +++ b/demo/docs/2d_face_demo.md @@ -1,26 +1,29 @@ ## 2D Face Keypoint Demo -
+We provide a demo script to test a single image or video with face detectors and top-down pose estimators. Please install `face_recognition` before running the demo: -We provide a demo script to test a single image or video with top-down pose estimators and face detectors.Please install `face_recognition` before running the demo, by `pip install face_recognition`. For more details, please refer to https://github.com/ageitgey/face_recognition. +``` +pip install face_recognition +``` + +For more details, please refer to [face_recognition](https://github.com/ageitgey/face_recognition). ### 2D Face Image Demo ```shell python demo/topdown_face_demo.py \ ${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \ - --img-root ${IMG_ROOT} --json-file ${JSON_FILE} \ --input ${INPUT_PATH} [--output-root ${OUTPUT_DIR}] \ [--show] [--device ${GPU_ID or CPU}] \ [--draw-heatmap ${DRAW_HEATMAP}] [--radius ${KPT_RADIUS}] \ [--kpt-thr ${KPT_SCORE_THR}] ``` -The pre-trained face keypoint estimation model can be found from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/face_2d_keypoint.html). +The pre-trained face keypoint estimation models can be found from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/face_2d_keypoint.html). Take [aflw model](https://download.openmmlab.com/mmpose/face/hrnetv2/hrnetv2_w18_aflw_256x256-f2bbc62b_20210125.pth) as an example: ```shell -python demo/top_down_img_demo.py \ +python demo/topdown_face_demo.py \ configs/face_2d_keypoint/topdown_heatmap/aflw/td-hm_hrnetv2-w18_8xb64-60e_aflw-256x256.py \ https://download.openmmlab.com/mmpose/face/hrnetv2/hrnetv2_w18_aflw_256x256-f2bbc62b_20210125.pth \ --input tests/data/cofw/001766.jpg \ --show --draw-heatmap @@ -29,14 +32,14 @@ python demo/top_down_img_demo.py \ Visualization result: -
+
If you use a heatmap-based model and set argument `--draw-heatmap`, the predicted heatmap will be visualized together with the keypoints. To save visualized results on disk: ```shell -python demo/top_down_img_demo.py \ +python demo/topdown_face_demo.py \ configs/face_2d_keypoint/topdown_heatmap/aflw/td-hm_hrnetv2-w18_8xb64-60e_aflw-256x256.py \ https://download.openmmlab.com/mmpose/face/hrnetv2/hrnetv2_w18_aflw_256x256-f2bbc62b_20210125.pth \ --input tests/data/cofw/001766.jpg \ @@ -46,7 +49,7 @@ python demo/top_down_img_demo.py \ To run demos on CPU: ```shell -python demo/top_down_img_demo.py \ +python demo/topdown_face_demo.py \ configs/face_2d_keypoint/topdown_heatmap/aflw/td-hm_hrnetv2-w18_8xb64-60e_aflw-256x256.py \ https://download.openmmlab.com/mmpose/face/hrnetv2/hrnetv2_w18_aflw_256x256-f2bbc62b_20210125.pth \ --input tests/data/cofw/001766.jpg \ @@ -55,20 +58,20 @@ python demo/top_down_img_demo.py \ ### 2D Face Video Demo -Videos share same interface with images. The difference is, the `${INPUT_PATH}` for videos can be the local path or **URL** link to video file. +Videos share the same interface with images. The difference is that the `${INPUT_PATH}` for videos can be the local path or **URL** link to video file. ```shell -python demo/top_down_img_demo.py \ +python demo/topdown_face_demo.py \ configs/face_2d_keypoint/topdown_heatmap/aflw/td-hm_hrnetv2-w18_8xb64-60e_aflw-256x256.py \ https://download.openmmlab.com/mmpose/face/hrnetv2/hrnetv2_w18_aflw_256x256-f2bbc62b_20210125.pth \ --input demo/resources/ \ --show --draw-heatmap --output-root vis_results ``` -
+
The original video can be downloaded from [Google Drive](https://drive.google.com/file/d/1kQt80t6w802b_vgVcmiV_QfcSJ3RWzmb/view?usp=sharing). ### Speed Up Inference -For 2D face keypoint estimation models, try to edit the config file. For example, set `model.test_cfg.flip_test=False` in [aflw_hrnetv2](../../configs/face_2d_keypoint/topdown_heatmap/aflw/td-hm_hrnetv2-w18_8xb64-60e_aflw-256x256.py). +For 2D face keypoint estimation models, try to edit the config file. For example, set `model.test_cfg.flip_test=False` in [aflw_hrnetv2](../../configs/face_2d_keypoint/topdown_heatmap/aflw/td-hm_hrnetv2-w18_8xb64-60e_aflw-256x256.py#90). diff --git a/demo/docs/2d_hand_demo.md b/demo/docs/2d_hand_demo.md index 9b82912f0d..fe1ba18692 100644 --- a/demo/docs/2d_hand_demo.md +++ b/demo/docs/2d_hand_demo.md @@ -1,10 +1,8 @@ ## 2D Hand Keypoint Demo -
+We provide a demo script to test a single image or video with hand detectors and top-down pose estimators. Assume that you have already installed [mmdet](https://github.com/open-mmlab/mmdetection) with version >= 3.0. -We provide a demo script to test a single image or video with top-down pose estimators and hand detectors. Assume that you have already installed [mmdet](https://github.com/open-mmlab/mmdetection) with version >= 3.0. - -*Hand Box Model Preparation:* The pre-trained hand box estimation model can be found in [det model zoo](/demo/docs/mmdet_modelzoo.md). +**Hand Box Model Preparation:** The pre-trained hand box estimation model can be found in [mmdet model zoo](/demo/docs/mmdet_modelzoo.md). ### 2D Hand Image Demo @@ -26,7 +24,7 @@ Take [onehand10k model](https://download.openmmlab.com/mmpose/hand/hrnetv2/hrnet python demo/topdown_demo_with_mmdet.py \ demo/mmdetection_cfg/cascade_rcnn_x101_64x4d_fpn_1class.py \ https://download.openmmlab.com/mmpose/mmdet_pretrained/cascade_rcnn_x101_64x4d_fpn_20e_onehand10k-dac19597_20201030.pth \ - configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb32-210e_onehand10k-256x256.py \ + configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb64-210e_onehand10k-256x256.py \ https://download.openmmlab.com/mmpose/hand/hrnetv2/hrnetv2_w18_onehand10k_256x256-30bc9c6b_20210330.pth \ --input tests/data/onehand10k/9.jpg \ --show --draw-heatmap @@ -44,10 +42,10 @@ To save visualized results on disk: python demo/topdown_demo_with_mmdet.py \ demo/mmdetection_cfg/cascade_rcnn_x101_64x4d_fpn_1class.py \ https://download.openmmlab.com/mmpose/mmdet_pretrained/cascade_rcnn_x101_64x4d_fpn_20e_onehand10k-dac19597_20201030.pth \ - configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb32-210e_onehand10k-256x256.py \ + configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb64-210e_onehand10k-256x256.py \ https://download.openmmlab.com/mmpose/hand/hrnetv2/hrnetv2_w18_onehand10k_256x256-30bc9c6b_20210330.pth \ --input tests/data/onehand10k/9.jpg \ - --output-root vis_results --draw-heatmap + --output-root vis_results --show --draw-heatmap ``` To run demos on CPU: @@ -56,7 +54,7 @@ To run demos on CPU: python demo/topdown_demo_with_mmdet.py \ demo/mmdetection_cfg/cascade_rcnn_x101_64x4d_fpn_1class.py \ https://download.openmmlab.com/mmpose/mmdet_pretrained/cascade_rcnn_x101_64x4d_fpn_20e_onehand10k-dac19597_20201030.pth \ - configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb32-210e_onehand10k-256x256.py \ + configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb64-210e_onehand10k-256x256.py \ https://download.openmmlab.com/mmpose/hand/hrnetv2/hrnetv2_w18_onehand10k_256x256-30bc9c6b_20210330.pth \ --input tests/data/onehand10k/9.jpg \ --show --draw-heatmap --device cpu @@ -64,16 +62,16 @@ python demo/topdown_demo_with_mmdet.py \ ### 2D Hand Keypoints Video Demo -Videos share same interface with images. The difference is, the `${INPUT_PATH}` for videos can be the local path or **URL** link to video file. +Videos share the same interface with images. The difference is that the `${INPUT_PATH}` for videos can be the local path or **URL** link to video file. 
```shell python demo/topdown_demo_with_mmdet.py \ demo/mmdetection_cfg/cascade_rcnn_x101_64x4d_fpn_1class.py \ https://download.openmmlab.com/mmpose/mmdet_pretrained/cascade_rcnn_x101_64x4d_fpn_20e_onehand10k-dac19597_20201030.pth \ - configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb32-210e_onehand10k-256x256.py \ + configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb64-210e_onehand10k-256x256.py \ https://download.openmmlab.com/mmpose/hand/hrnetv2/hrnetv2_w18_onehand10k_256x256-30bc9c6b_20210330.pth \ --input demo/resources/ \ - --output-root vis_results --draw-heatmap + --output-root vis_results --show --draw-heatmap ```
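For a programmatic alternative to the shell demos above, the snippet below is a minimal sketch that reuses the `init_model`/`inference_topdown` API shown in the installation guide later in this patch. It assumes MMPose is installed from source (so the relative config path resolves) and that, with no bounding boxes passed, the whole image is treated as a single hand instance; the config, checkpoint and test image are the ones used in the commands above, and the sketch is an illustration rather than part of the demo script.

```python
from mmpose.apis import inference_topdown, init_model
from mmpose.utils import register_all_modules

# register MMPose modules so the config can be parsed
register_all_modules()

# config and checkpoint taken from the hand demo commands above
config_file = ('configs/hand_2d_keypoint/topdown_heatmap/onehand10k/'
               'td-hm_hrnetv2-w18_8xb64-210e_onehand10k-256x256.py')
checkpoint_file = ('https://download.openmmlab.com/mmpose/hand/hrnetv2/'
                   'hrnetv2_w18_onehand10k_256x256-30bc9c6b_20210330.pth')

model = init_model(config_file, checkpoint_file, device='cpu')  # or device='cuda:0'

# no bboxes are given, so the full image is used as the hand region
results = inference_topdown(model, 'tests/data/onehand10k/9.jpg')
```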
@@ -82,4 +80,4 @@ The original video can be downloaded from [Github](https://raw.githubusercontent ### Speed Up Inference -For 2D hand keypoint estimation models, try to edit the config file. For example, set `model.test_cfg.flip_test=False` in [onehand10k_hrnetv2](../../configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb32-210e_onehand10k-256x256.py). +For 2D hand keypoint estimation models, try to edit the config file. For example, set `model.test_cfg.flip_test=False` in [onehand10k_hrnetv2](../../configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb64-210e_onehand10k-256x256.py#90). diff --git a/demo/docs/2d_human_pose_demo.md b/demo/docs/2d_human_pose_demo.md index e47d51ef4e..60f3c80bce 100644 --- a/demo/docs/2d_human_pose_demo.md +++ b/demo/docs/2d_human_pose_demo.md @@ -1,6 +1,6 @@ ## 2D Human Pose Demo -
+We provide demo scripts to perform human pose estimation on images or videos. ### 2D Human Pose Top-Down Image Demo @@ -18,7 +18,7 @@ python demo/image_demo.py \ If you use a heatmap-based model and set argument `--draw-heatmap`, the predicted heatmap will be visualized together with the keypoints. -The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/body_2d_keypoint.html). +The pre-trained human pose estimation models can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/body_2d_keypoint.html). Take [coco model](https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth) as an example: ```shell @@ -28,7 +28,6 @@ python demo/image_demo.py \ https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth \ --out-file vis_results.jpg \ --draw-heatmap - ``` To run this demo on CPU: @@ -63,7 +62,7 @@ python demo/topdown_demo_with_mmdet.py \ [--bbox-thr ${BBOX_SCORE_THR} --kpt-thr ${KPT_SCORE_THR}] ``` -Examples: +Example: ```shell python demo/topdown_demo_with_mmdet.py \ @@ -85,7 +84,7 @@ The above demo script can also take video as input, and run mmdet for human dete Assume that you have already installed [mmdet](https://github.com/open-mmlab/mmdetection) with version >= 3.0. -Examples: +Example: ```shell python demo/topdown_demo_with_mmdet.py \ @@ -94,7 +93,7 @@ python demo/topdown_demo_with_mmdet.py \ configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192.py \ https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w32_coco_256x192-c78dce93_20200708.pth \ --input tests/data/posetrack18/videos/000001_mpiinew_test/000001_mpiinew_test.mp4 \ ---output-root=vis_results/demo --show --draw-heatmap + --output-root=vis_results/demo --show --draw-heatmap ``` ### Speed Up Inference @@ -104,4 +103,4 @@ Some tips to speed up MMPose inference: For top-down models, try to edit the config file. For example, 1. set `model.test_cfg.flip_test=False` in [topdown-res50](/configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_res50_8xb64-210e_coco-256x192.py#L56). -2. use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/en/latest/model_zoo.html). +2. use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html). diff --git a/demo/docs/2d_wholebody_pose_demo.md b/demo/docs/2d_wholebody_pose_demo.md index b4d39c0464..8551388172 100644 --- a/demo/docs/2d_wholebody_pose_demo.md +++ b/demo/docs/2d_wholebody_pose_demo.md @@ -1,7 +1,5 @@ ## 2D Human Whole-Body Pose Demo -
- ### 2D Human Whole-Body Pose Top-Down Image Demo #### Use full image as input @@ -16,7 +14,7 @@ python demo/image_demo.py \ [--draw_heatmap] ``` -The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/2d_wholebody_keypoint.html). +The pre-trained hand pose estimation models can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/2d_wholebody_keypoint.html). Take [coco-wholebody_vipnas_res50_dark](https://download.openmmlab.com/mmpose/top_down/vipnas/vipnas_res50_wholebody_256x192_dark-67c0ce35_20211112.pth) model as an example: ```shell @@ -25,7 +23,6 @@ python demo/image_demo.py \ configs/wholebody_2d_keypoint/topdown_heatmap/coco-wholebody/td-hm_vipnas-res50_dark-8xb64-210e_coco-wholebody-256x192.py \ https://download.openmmlab.com/mmpose/top_down/vipnas/vipnas_res50_wholebody_256x192_dark-67c0ce35_20211112.pth \ --out-file vis_results.jpg - ``` To run demos on CPU: @@ -85,6 +82,10 @@ python demo/topdown_demo_with_mmdet.py \ --output-root vis_results/ --show ``` +Visualization result: + +
+ ### Speed Up Inference Some tips to speed up MMPose inference: @@ -92,4 +93,4 @@ Some tips to speed up MMPose inference: For top-down models, try to edit the config file. For example, 1. set `model.test_cfg.flip_test=False` in [pose_hrnet_w48_dark+](/configs/wholebody_2d_keypoint/topdown_heatmap/coco-wholebody/td-hm_hrnet-w48_dark-8xb32-210e_coco-wholebody-384x288.py#L90). -2. use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/en/latest/model_zoo.html). +2. use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html). diff --git a/demo/docs/mmdet_modelzoo.md b/demo/docs/mmdet_modelzoo.md index 1630eb9baf..d438a5e982 100644 --- a/demo/docs/mmdet_modelzoo.md +++ b/demo/docs/mmdet_modelzoo.md @@ -2,7 +2,7 @@ ### Human Bounding Box Detection Models -For human bounding box detection models, please download from [MMDetection Model Zoo](https://mmdetection.readthedocs.io/en/latest/model_zoo.html). +For human bounding box detection models, please download from [MMDetection Model Zoo](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html). MMDetection provides 80-class COCO-pretrained models, which already includes the `person` category. ### Hand Bounding Box Detection Models @@ -22,7 +22,7 @@ For hand bounding box detection, we simply train our hand box models on onehand1 In COCO dataset, there are 80 object categories, including 10 common `animal` categories (14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe') -For animals in the categories, please download from [MMDetection Model Zoo](https://mmdetection.readthedocs.io/en/latest/model_zoo.html). +For animals in the categories, please download from [MMDetection Model Zoo](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html). #### Macaque detection results on MacaquePose test set diff --git a/demo/topdown_face_demo.py b/demo/topdown_face_demo.py index 7730603478..3465f685d3 100644 --- a/demo/topdown_face_demo.py +++ b/demo/topdown_face_demo.py @@ -77,7 +77,7 @@ def visualize_img(args, img_path, pose_estimator, visualizer, show_interval): def main(): """Visualize the demo images. - Using mmdet to detect the human. + Use `face_recognition` to detect the face. """ parser = ArgumentParser() parser.add_argument('pose_config', help='Config file for pose') @@ -151,6 +151,7 @@ def main(): elif input_type == 'video': tmp_folder = tempfile.TemporaryDirectory() video = mmcv.VideoReader(args.input) + progressbar = mmengine.ProgressBar(len(video)) video.cvt2frames(tmp_folder.name, show_progress=False) output_root = args.output_root args.output_root = tmp_folder.name @@ -161,6 +162,7 @@ def main(): pose_estimator, visualizer, show_interval=1) + progressbar.update() if output_root: mmcv.frames2video( tmp_folder.name, diff --git a/docs/en/installation.md b/docs/en/installation.md index 6b6159a690..0b6ad73ab4 100644 --- a/docs/en/installation.md +++ b/docs/en/installation.md @@ -65,6 +65,12 @@ mim install mmengine mim install "mmcv>=2.0.0rc1" ``` +Note that some of the demo scripts in MMPose require [MMDetection](https://github.com/open-mmlab/mmdetection) (mmdet) for human detection. If you want to run these demo scripts with mmdet, you can easily install mmdet as a dependency by running: + +```shell +mim install "mmdet>=3.0.0rc0" +``` + **Step 1.** Install MMPose. 
Case A: To develop and run mmpose directly, install it from source: @@ -88,7 +94,54 @@ mim install "mmpose>=1.0.0b0" ``` ### Verify the installation -To verify that MMPose is installed correctly, you can run an inference demo according to this [guide](/demo/docs/2d_human_pose_demo.md). +To verify that MMPose is installed correctly, you can run an inference demo with the following steps. + +**Step 1.** We need to download config and checkpoint files. + +```shell +mim download mmpose --config td-hm_hrnet-w48_8xb32-210e_coco-256x192 --dest . +``` + +The download will take several seconds or more, depending on your network environment. When it is done, you will find two files `td-hm_hrnet-w48_8xb32-210e_coco-256x192.py` and `hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth` in your current folder. + +**Step 2.** Run the inference demo. + +Option (A). If you install mmpose from source, just run the following command under the folder `$MMPOSE`: + +```shell +python demo/image_demo.py \ + tests/data/coco/000000000785.jpg \ + td-hm_hrnet-w48_8xb32-210e_coco-256x192.py \ + hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth \ + --out-file vis_results.jpg \ + --draw-heatmap +``` + +If everything goes fine, you will get this visualization result: + +![image](https://user-images.githubusercontent.com/87690686/187824033-2cce0f55-034a-4127-82e2-52744178bc32.jpg) + +The visualization result will be saved as `vis_results.jpg` in your current folder, where the predicted keypoints and heatmaps are plotted on the person in the image. + +Option (B). If you install mmpose with pip, open your Python interpreter and copy & paste the following code. + +```python +from mmpose.apis import inference_topdown, init_model +from mmpose.utils import register_all_modules + +register_all_modules() + +config_file = 'td-hm_hrnet-w48_8xb32-210e_coco-256x192.py' +checkpoint_file = 'hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth' +model = init_model(config_file, checkpoint_file, device='cpu') # or device='cuda:0' + +# please prepare an image with a person +results = inference_topdown(model, 'demo.jpg') +``` + +The `demo.jpg` can be downloaded from [Github](https://raw.githubusercontent.com/open-mmlab/mmpose/1.x/tests/data/coco/000000000785.jpg). + +The inference results will be a list of `PoseDataSample`, and the predictions are in the `pred_instances`, indicating the detected keypoint locations and scores. ### Customize Installation diff --git a/docs/en/notes/faq.md b/docs/en/notes/faq.md index 39fa7ec760..e05a695adc 100644 --- a/docs/en/notes/faq.md +++ b/docs/en/notes/faq.md @@ -145,4 +145,4 @@ Compatible MMPose and MMCV versions are shown as below. Please choose the correc A few approaches may help to improve the inference speed: 1. Set `flip_test=False` in `init_cfg` in the config file. - 2. For top-down models, use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/en/latest/model_zoo.html). + 2. For top-down models, use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html).
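To make the last sentence of the verification guide above more concrete, here is a short sketch of how the returned list of `PoseDataSample` objects could be inspected. The `keypoints` and `keypoint_scores` attribute names are an assumption based on the description of `pred_instances` (keypoint locations and scores); they are not spelled out in this patch.

```python
# continuing from the verification example: `results` is a list of PoseDataSample
for data_sample in results:
    pred = data_sample.pred_instances
    # assumed layout: (num_instances, num_keypoints, 2) coordinates and
    # (num_instances, num_keypoints) confidence scores
    print(pred.keypoints.shape, pred.keypoint_scores.shape)
```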
diff --git a/docs/en/overview.md b/docs/en/overview.md index 2f442750f6..3733d1637c 100644 --- a/docs/en/overview.md +++ b/docs/en/overview.md @@ -4,7 +4,7 @@ This chapter will introduce you to the overall framework of MMPose and provide l ## What is MMPose -![image](https://user-images.githubusercontent.com/15977946/188659200-e5694ca7-28ff-43e5-ae33-acc1fdff7420.jpg) +![image](https://user-images.githubusercontent.com/26127467/190981395-5ecf0146-f8a7-482f-a87f-b0c64dabf7cb.jpg) MMPose is a Pytorch-based pose estimation open-source toolkit, a member of the [OpenMMLab Project](https://github.com/open-mmlab). It contains a rich set of algorithms for 2d multi-person human pose estimation, 2d hand pose estimation, 2d face landmark detection, 133 keypoint whole-body human pose estimation, fashion landmark detection and animal pose estimation as well as related components and modules, below is its overall framework. diff --git a/docs/en/quick_run.md b/docs/en/quick_run.md index e6dd98d2fd..51aabfc967 100644 --- a/docs/en/quick_run.md +++ b/docs/en/quick_run.md @@ -179,7 +179,7 @@ model = dict( ) ``` -or add `--cfg-options='model.test_cfg.output_heatmaps=True` at the end of your command. +or add `--cfg-options='model.test_cfg.output_heatmaps=True'` at the end of your command. Visualization result (top: decoded keypoints; bottom: predicted heatmap): diff --git a/docs/zh_cn/installation.md b/docs/zh_cn/installation.md index 7faff55448..65e6bcd0bf 100644 --- a/docs/zh_cn/installation.md +++ b/docs/zh_cn/installation.md @@ -64,6 +64,12 @@ mim install mmengine mim install "mmcv>=2.0.0rc1" ``` +请注意,MMPose 中的一些推理示例脚本需要使用 [MMDetection](https://github.com/open-mmlab/mmdetection) (mmdet) 检测人体。如果您想运行这些示例脚本,可以通过运行以下命令安装 mmdet: + +```shell +mim install "mmdet>=3.0.0rc0" +``` + **第 2 步** 安装 MMPose 根据具体需求,我们支持两种安装模式: 从源码安装(推荐)和作为 Python 包安装 @@ -92,7 +98,53 @@ mim install "mmpose>=1.0.0b0" ### 验证安装 -为了验证 MMPose 的安装是否正确,您可以运行我们提供的 [示例代码](/demo/docs/2d_human_pose_demo.md) 来执行模型推理。 +为了验证 MMPose 是否安装正确,您可以通过以下步骤运行模型推理。 + +**第 1 步** 我们需要下载配置文件和模型权重文件 + +```shell +mim download mmpose --config td-hm_hrnet-w48_8xb32-210e_coco-256x192 --dest . 
+``` + +下载过程往往需要几秒或更多的时间,这取决于您的网络环境。完成之后,您会在当前目录下找到这两个文件:`td-hm_hrnet-w48_8xb32-210e_coco-256x192.py` 和 `hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth`, 分别是配置文件和对应的模型权重文件。 + +**第 2 步** 验证推理示例 + +如果您是**从源码安装**的 mmpose,可以直接运行以下命令进行验证: + +```shell +python demo/image_demo.py \ + tests/data/coco/000000000785.jpg \ + td-hm_hrnet-w48_8xb32-210e_coco-256x192.py \ + hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth \ + --out-file vis_results.jpg \ + --draw-heatmap +``` + +如果一切顺利,您将会得到这样的可视化结果: + +![image](https://user-images.githubusercontent.com/87690686/187824033-2cce0f55-034a-4127-82e2-52744178bc32.jpg) + +代码会将预测的关键点和热图绘制在图像中的人体上,并保存到当前文件夹下的 `vis_results.jpg`。 + +如果您是**作为 Python 包安装**,可以打开您的 Python 解释器,复制并粘贴如下代码: + +```python +from mmpose.apis import inference_topdown, init_model +from mmpose.utils import register_all_modules + +register_all_modules() + +config_file = 'td-hm_hrnet-w48_8xb32-210e_coco-256x192.py' +checkpoint_file = 'hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth' +model = init_model(config_file, checkpoint_file, device='cpu') # or device='cuda:0' + +# 请准备好一张带有人体的图片 +results = inference_topdown(model, 'demo.jpg') +``` + +示例图片 `demo.jpg` 可以从 [Github](https://raw.githubusercontent.com/open-mmlab/mmpose/1.x/tests/data/coco/000000000785.jpg) 下载。 +推理结果是一个 `PoseDataSample` 列表,预测结果将会保存在 `pred_instances` 中,包括检测到的关键点位置和置信度。 ### 自定义安装 diff --git a/docs/zh_cn/notes/faq.md b/docs/zh_cn/notes/faq.md index 005774e2b5..57461bb249 100644 --- a/docs/zh_cn/notes/faq.md +++ b/docs/zh_cn/notes/faq.md @@ -134,4 +134,4 @@ Compatible MMPose and MMCV versions are shown as below. Please choose the correc For top-down models, try to edit the config file. For example, 1. set `flip_test=False` in `init_cfg` in the config file. - 2. use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/en/latest/model_zoo.html). + 2. use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/zh_CN/3.x/model_zoo.html).
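Several of the speed-up tips above (in the demo documents and both FAQ entries) come down to disabling flip testing via `model.test_cfg.flip_test=False` in the config being used. As a rough illustration of what that edit might look like, following the `model = dict(...)` pattern quoted in `quick_run.md`, with all other fields omitted:

```python
# illustrative fragment of the model section of a pose config;
# only the flip_test flag comes from the tips above, the rest of the
# model definition (backbone, head, etc.) is left unchanged and omitted here
model = dict(
    # ...
    test_cfg=dict(
        flip_test=False,  # skip the extra flipped forward pass at test time
    ),
)
```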