[Doc] Refine installation and fix some docs (#1659)
* [Doc] Refine installation docs

* modify installation verification

* fix links in readme

* fix typos in demo

* fix demo results

* fix demo image

* refine some docs

* update links

* fix overview image

Co-authored-by: lupeng <penglu2097@gmail.com>
2 people authored and ly015 committed Oct 14, 2022
1 parent 7a87210 commit 7e06cbb
Showing 15 changed files with 267 additions and 159 deletions.
30 changes: 15 additions & 15 deletions README.md
@@ -27,10 +27,10 @@
[![Percentage of issues still open](https://isitmaintained.com/badge/open/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/issues)

[📘Documentation](https://mmpose.readthedocs.io/en/1.x/) |
-[🛠️Installation](https://mmpose.readthedocs.io/en/1.x/install.html) |
-[👀Model Zoo](https://mmpose.readthedocs.io/en/1.x/modelzoo.html) |
-[📜Papers](https://mmpose.readthedocs.io/en/1.x/papers/algorithms.html) |
-[🆕Update News](https://mmpose.readthedocs.io/en/1.x/changelog.html) |
+[🛠️Installation](https://mmpose.readthedocs.io/en/1.x/installation.html) |
+[👀Model Zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo.html) |
+[📜Papers](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html) |
+[🆕Update News](https://mmpose.readthedocs.io/en/1.x/notes/changelog.html) |
[🤔Reporting Issues](https://github.com/open-mmlab/mmpose/issues/new/choose)

</div>
@@ -52,17 +52,17 @@ https://user-images.githubusercontent.com/15977946/124654387-0fd3c500-ded1-11eb-
- **Support diverse tasks**

We support a wide spectrum of mainstream pose analysis tasks in the current research community, including 2d multi-person human pose estimation, 2d hand pose estimation, 2d face landmark detection, 133-keypoint whole-body human pose estimation, 3d human mesh recovery, fashion landmark detection and animal pose estimation.
-See [demo.md](demo/README.md) for more information.
+See [Demo](demo/docs/) for more information.

- **Higher efficiency and higher accuracy**

MMPose implements multiple state-of-the-art (SOTA) deep learning models, including both top-down & bottom-up approaches. We achieve faster training speed and higher accuracy than other popular codebases, such as [HRNet](https://github.com/leoxiaobin/deep-high-resolution-net.pytorch).
-See [benchmark.md](docs/en/benchmark.md) for more information.
+See [benchmark.md](docs/en/notes/benchmark.md) for more information.

- **Support for various datasets**

The toolbox directly supports multiple popular and representative datasets, such as COCO, AIC, MPII, MPII-TRB, and OCHuman.
-See [data_preparation.md](docs/en/data_preparation.md) for more information.
+See [dataset_zoo](docs/en/dataset_zoo) for more information.

- **Well designed, tested and documented**

@@ -99,17 +99,17 @@ Please refer to [installation.md](https://mmpose.readthedocs.io/en/1.x/installat

We provide a series of tutorials about the basic usage of MMPose for new users:

-- [About Configs](https://mmpose.readthedocs.io/en/1.x/user_guides/configs.md)
-- [Add New Dataset](https://mmpose.readthedocs.io/en/1.x/user_guides/prepare_datasets.md)
-- [Keypoint Encoding & Decoding](https://mmpose.readthedocs.io/en/1.x/user_guides/codecs.md)
-- [Inference with Existing Models](https://mmpose.readthedocs.io/en/1.x/user_guides/inference.md)
-- [Train and Test](https://mmpose.readthedocs.io/en/1.x/user_guides/train_and_test.md)
-- [Visualization Tools](https://mmpose.readthedocs.io/en/1.x/user_guides/visualization.md)
-- [Other Useful Tools](https://mmpose.readthedocs.io/en/1.x/user_guides/useful_tools.md)
+- [About Configs](https://mmpose.readthedocs.io/en/1.x/user_guides/configs.html)
+- [Add New Dataset](https://mmpose.readthedocs.io/en/1.x/user_guides/prepare_datasets.html)
+- [Keypoint Encoding & Decoding](https://mmpose.readthedocs.io/en/1.x/user_guides/codecs.html)
+- [Inference with Existing Models](https://mmpose.readthedocs.io/en/1.x/user_guides/inference.html)
+- [Train and Test](https://mmpose.readthedocs.io/en/1.x/user_guides/train_and_test.html)
+- [Visualization Tools](https://mmpose.readthedocs.io/en/1.x/user_guides/visualization.html)
+- [Other Useful Tools](https://mmpose.readthedocs.io/en/1.x/user_guides/useful_tools.html)

## Model Zoo

-Results and models are available in the *README.md* of each method's config directory.
+Results and models are available in the **README.md** of each method's config directory.
A summary can be found in the [Model Zoo](https://mmpose.readthedocs.io/en/1.x/modelzoo.html) page.

<details open>
188 changes: 94 additions & 94 deletions README_CN.md

Large diffs are not rendered by default.

10 changes: 5 additions & 5 deletions demo/docs/2d_animal_demo.md
@@ -15,7 +15,7 @@ python demo/topdown_demo_with_mmdet.py \
[--device ${GPU_ID or CPU}]
```

-The pre-trained animal pose estimation model can be found from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/animal.html).
+The pre-trained animal pose estimation model can be found in the [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/animal_2d_keypoint.html).
Take [animalpose model](https://download.openmmlab.com/mmpose/animal/hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth) as an example:

```shell
@@ -39,7 +39,7 @@ The argument `--det-cat-id=15` selects detected bounding boxes with the label 'cat'
**COCO-animals**
In the COCO dataset, there are 80 object categories, including 10 common `animal` categories (14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe').

-For other animals, we have also provided some pre-trained animal detection models (1-class models). Supported models can be found in [det model zoo](/demo/docs/mmdet_modelzoo.md).
+For other animals, we have also provided some pre-trained animal detection models (1-class models). Supported models can be found in [detection model zoo](/demo/docs/mmdet_modelzoo.md).
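
For reference, the `--det-cat-id` values map one-to-one onto the COCO category ids quoted above. An illustrative lookup (this helper is not part of the MMPose demo scripts):

```python
# Illustrative lookup for choosing --det-cat-id, based on the COCO
# category ids listed in this document.
COCO_ANIMAL_CAT_IDS = {
    'bird': 14, 'cat': 15, 'dog': 16, 'horse': 17, 'sheep': 18,
    'cow': 19, 'elephant': 20, 'bear': 21, 'zebra': 22, 'giraffe': 23,
}

print(COCO_ANIMAL_CAT_IDS['dog'])  # pass 16 as --det-cat-id for dogs
```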

To save visualized results on disk:

@@ -67,7 +67,7 @@ python demo/topdown_demo_with_mmdet.py \

### 2D Animal Pose Video Demo

-Videos share same interface with images. The difference is, the `${INPUT_PATH}` for videos can be the local path or **URL** link to video file.
+Videos share the same interface with images. The difference is that the `${INPUT_PATH}` for videos can be a local path or a **URL** link to a video file.

For example,

@@ -89,5 +89,5 @@ The original video can be downloaded from [Google Drive](https://drive.google.co

Some tips to speed up MMPose inference:

-1. set `model.test_cfg.flip_test=False` in [animalpose_hrnet-w32](../../configs/animal_2d_keypoint/topdown_heatmap/animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py).
-2. use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/en/latest/model_zoo.html).
+1. set `model.test_cfg.flip_test=False` in [animalpose_hrnet-w32](../../configs/animal_2d_keypoint/topdown_heatmap/animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py#85) (see the config sketch below).
+2. use a faster human bounding box detector; see [MMDetection](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html).
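
A minimal sketch of the edit in tip 1, assuming the standard MMPose 1.x (mmengine-style) config syntax. Saved next to the animalpose config, it inherits everything and overrides only the test-time flip setting:

```python
# Sketch: a derived config that disables test-time flip augmentation.
_base_ = ['./td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py']

# Disabling flip_test roughly halves inference cost per image,
# usually at a small cost in accuracy.
model = dict(test_cfg=dict(flip_test=False))
```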
27 changes: 15 additions & 12 deletions demo/docs/2d_face_demo.md
@@ -1,26 +1,29 @@
## 2D Face Keypoint Demo

-<img src="https://user-images.githubusercontent.com/11788150/109144943-ccd44900-779c-11eb-9e9d-8682e7629654.gif" width="600px" alt><br>
-
-We provide a demo script to test a single image or video with top-down pose estimators and face detectors. Please install `face_recognition` before running the demo, by `pip install face_recognition`. For more details, please refer to https://github.com/ageitgey/face_recognition.
+We provide a demo script to test a single image or video with face detectors and top-down pose estimators. Please install `face_recognition` before running the demo:
+
+```
+pip install face_recognition
+```
+
+For more details, please refer to [face_recognition](https://github.com/ageitgey/face_recognition).
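
As a quick sanity check that `face_recognition` works, a minimal sketch of the face-detection step the demo relies on (the image path is just an example from this repository's test data):

```python
# Minimal sketch using the face_recognition package referenced above.
import face_recognition

image = face_recognition.load_image_file("tests/data/cofw/001766.jpg")
# Each detection is a (top, right, bottom, left) tuple in pixel coordinates.
face_locations = face_recognition.face_locations(image)
print(f"Found {len(face_locations)} face(s)")
```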

### 2D Face Image Demo

```shell
python demo/topdown_face_demo.py \
${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
---img-root ${IMG_ROOT} --json-file ${JSON_FILE} \
+--input ${INPUT_PATH} [--output-root ${OUTPUT_DIR}] \
[--show] [--device ${GPU_ID or CPU}] \
[--draw-heatmap ${DRAW_HEATMAP}] [--radius ${KPT_RADIUS}] \
[--kpt-thr ${KPT_SCORE_THR}]
```

-The pre-trained face keypoint estimation model can be found from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/face_2d_keypoint.html).
+The pre-trained face keypoint estimation models can be found in the [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/face_2d_keypoint.html).
Take [aflw model](https://download.openmmlab.com/mmpose/face/hrnetv2/hrnetv2_w18_aflw_256x256-f2bbc62b_20210125.pth) as an example:

```shell
-python demo/top_down_img_demo.py \
+python demo/topdown_face_demo.py \
configs/face_2d_keypoint/topdown_heatmap/aflw/td-hm_hrnetv2-w18_8xb64-60e_aflw-256x256.py \
https://download.openmmlab.com/mmpose/face/hrnetv2/hrnetv2_w18_aflw_256x256-f2bbc62b_20210125.pth \
--input tests/data/cofw/001766.jpg \
@@ -29,14 +29,14 @@ python demo/top_down_img_demo.py \

Visualization result:

-<img src="https://user-images.githubusercontent.com/26127467/187676149-97b36b55-94f3-4c2a-b831-c6d1b839f029.jpg" height="500px" alt><br>
+<img src="https://user-images.githubusercontent.com/87690686/190857851-8d5afe60-fadf-4aa8-9a1c-5b32aaec7c79.jpg" height="500px" alt><br>

If you use a heatmap-based model and set the argument `--draw-heatmap`, the predicted heatmap will be visualized together with the keypoints.

To save visualized results on disk:

```shell
-python demo/top_down_img_demo.py \
+python demo/topdown_face_demo.py \
configs/face_2d_keypoint/topdown_heatmap/aflw/td-hm_hrnetv2-w18_8xb64-60e_aflw-256x256.py \
https://download.openmmlab.com/mmpose/face/hrnetv2/hrnetv2_w18_aflw_256x256-f2bbc62b_20210125.pth \
--input tests/data/cofw/001766.jpg \
@@ -46,7 +46,7 @@ python demo/top_down_img_demo.py \
To run demos on CPU:

```shell
-python demo/top_down_img_demo.py \
+python demo/topdown_face_demo.py \
configs/face_2d_keypoint/topdown_heatmap/aflw/td-hm_hrnetv2-w18_8xb64-60e_aflw-256x256.py \
https://download.openmmlab.com/mmpose/face/hrnetv2/hrnetv2_w18_aflw_256x256-f2bbc62b_20210125.pth \
--input tests/data/cofw/001766.jpg \
@@ -55,20 +55,20 @@ python demo/top_down_img_demo.py \

### 2D Face Video Demo

-Videos share same interface with images. The difference is, the `${INPUT_PATH}` for videos can be the local path or **URL** link to video file.
+Videos share the same interface with images. The difference is that the `${INPUT_PATH}` for videos can be a local path or a **URL** link to a video file.

```shell
-python demo/top_down_img_demo.py \
+python demo/topdown_face_demo.py \
configs/face_2d_keypoint/topdown_heatmap/aflw/td-hm_hrnetv2-w18_8xb64-60e_aflw-256x256.py \
https://download.openmmlab.com/mmpose/face/hrnetv2/hrnetv2_w18_aflw_256x256-f2bbc62b_20210125.pth \
--input demo/resources/<demo_face.mp4> \
--show --draw-heatmap --output-root vis_results
```

-<img src="https://user-images.githubusercontent.com/26127467/187677697-ba44030e-e4d8-4cc4-b112-39d94bb3ef45.gif" height="500px" alt><br>
+<img src="https://user-images.githubusercontent.com/87690686/190858159-b224b06a-7d34-4716-a8bc-4d127a39b90c.gif" height="500px" alt><br>

The original video can be downloaded from [Google Drive](https://drive.google.com/file/d/1kQt80t6w802b_vgVcmiV_QfcSJ3RWzmb/view?usp=sharing).

### Speed Up Inference

-For 2D face keypoint estimation models, try to edit the config file. For example, set `model.test_cfg.flip_test=False` in [aflw_hrnetv2](../../configs/face_2d_keypoint/topdown_heatmap/aflw/td-hm_hrnetv2-w18_8xb64-60e_aflw-256x256.py).
+For 2D face keypoint estimation models, try to edit the config file. For example, set `model.test_cfg.flip_test=False` in [aflw_hrnetv2](../../configs/face_2d_keypoint/topdown_heatmap/aflw/td-hm_hrnetv2-w18_8xb64-60e_aflw-256x256.py#90).
22 changes: 10 additions & 12 deletions demo/docs/2d_hand_demo.md
@@ -1,10 +1,8 @@
## 2D Hand Keypoint Demo

-<img src="https://user-images.githubusercontent.com/11788150/109098558-8c54db00-775c-11eb-8966-85df96b23dc5.gif" width="600px" alt><br>
-
-We provide a demo script to test a single image or video with top-down pose estimators and hand detectors. Assume that you have already installed [mmdet](https://github.com/open-mmlab/mmdetection) with version >= 3.0.
+We provide a demo script to test a single image or video with hand detectors and top-down pose estimators. Assume that you have already installed [mmdet](https://github.com/open-mmlab/mmdetection) with version >= 3.0.

-*Hand Box Model Preparation:* The pre-trained hand box estimation model can be found in [det model zoo](/demo/docs/mmdet_modelzoo.md).
+**Hand Box Model Preparation:** The pre-trained hand box estimation model can be found in [mmdet model zoo](/demo/docs/mmdet_modelzoo.md).
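
An illustrative check that the mmdet >= 3.0 assumption above holds before running the demo:

```python
# Sketch: verify the installed mmdet major version.
import mmdet

major = int(mmdet.__version__.split('.')[0])
assert major >= 3, f'mmdet {mmdet.__version__} found; this demo assumes mmdet >= 3.0'
print(f'mmdet {mmdet.__version__} detected')
```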

### 2D Hand Image Demo

@@ -26,7 +24,7 @@ Take [onehand10k model](https://download.openmmlab.com/mmpose/hand/hrnetv2/hrnet
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/cascade_rcnn_x101_64x4d_fpn_1class.py \
https://download.openmmlab.com/mmpose/mmdet_pretrained/cascade_rcnn_x101_64x4d_fpn_20e_onehand10k-dac19597_20201030.pth \
-configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb32-210e_onehand10k-256x256.py \
+configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb64-210e_onehand10k-256x256.py \
https://download.openmmlab.com/mmpose/hand/hrnetv2/hrnetv2_w18_onehand10k_256x256-30bc9c6b_20210330.pth \
--input tests/data/onehand10k/9.jpg \
--show --draw-heatmap
@@ -44,10 +42,10 @@ To save visualized results on disk:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/cascade_rcnn_x101_64x4d_fpn_1class.py \
https://download.openmmlab.com/mmpose/mmdet_pretrained/cascade_rcnn_x101_64x4d_fpn_20e_onehand10k-dac19597_20201030.pth \
-configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb32-210e_onehand10k-256x256.py \
+configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb64-210e_onehand10k-256x256.py \
https://download.openmmlab.com/mmpose/hand/hrnetv2/hrnetv2_w18_onehand10k_256x256-30bc9c6b_20210330.pth \
--input tests/data/onehand10k/9.jpg \
---output-root vis_results --draw-heatmap
+--output-root vis_results --show --draw-heatmap
```

To run demos on CPU:
@@ -56,24 +54,24 @@ To run demos on CPU:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/cascade_rcnn_x101_64x4d_fpn_1class.py \
https://download.openmmlab.com/mmpose/mmdet_pretrained/cascade_rcnn_x101_64x4d_fpn_20e_onehand10k-dac19597_20201030.pth \
-configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb32-210e_onehand10k-256x256.py \
+configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb64-210e_onehand10k-256x256.py \
https://download.openmmlab.com/mmpose/hand/hrnetv2/hrnetv2_w18_onehand10k_256x256-30bc9c6b_20210330.pth \
--input tests/data/onehand10k/9.jpg \
--show --draw-heatmap --device cpu
```

### 2D Hand Keypoints Video Demo

-Videos share same interface with images. The difference is, the `${INPUT_PATH}` for videos can be the local path or **URL** link to video file.
+Videos share the same interface with images. The difference is that the `${INPUT_PATH}` for videos can be a local path or a **URL** link to a video file.

```shell
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/cascade_rcnn_x101_64x4d_fpn_1class.py \
https://download.openmmlab.com/mmpose/mmdet_pretrained/cascade_rcnn_x101_64x4d_fpn_20e_onehand10k-dac19597_20201030.pth \
-configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb32-210e_onehand10k-256x256.py \
+configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb64-210e_onehand10k-256x256.py \
https://download.openmmlab.com/mmpose/hand/hrnetv2/hrnetv2_w18_onehand10k_256x256-30bc9c6b_20210330.pth \
--input demo/resources/<demo_hand.mp4> \
---output-root vis_results --draw-heatmap
+--output-root vis_results --show --draw-heatmap
```

<img src="https://user-images.githubusercontent.com/26127467/187665873-3ac836ec-8da5-45e1-8d78-c0abe962bd5e.gif" height="500px" alt><br>
@@ -82,4 +80,4 @@ The original video can be downloaded from [GitHub](https://raw.githubusercontent

### Speed Up Inference

-For 2D hand keypoint estimation models, try to edit the config file. For example, set `model.test_cfg.flip_test=False` in [onehand10k_hrnetv2](../../configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb32-210e_onehand10k-256x256.py).
+For 2D hand keypoint estimation models, try to edit the config file. For example, set `model.test_cfg.flip_test=False` in [onehand10k_hrnetv2](../../configs/hand_2d_keypoint/topdown_heatmap/onehand10k/td-hm_hrnetv2-w18_8xb64-210e_onehand10k-256x256.py#90).
13 changes: 6 additions & 7 deletions demo/docs/2d_human_pose_demo.md
@@ -1,6 +1,6 @@
## 2D Human Pose Demo

-<img src="https://raw.githubusercontent.com/open-mmlab/mmpose/master/demo/resources/demo_coco.gif" width="600px" alt><br>
+We provide demo scripts to perform human pose estimation on images or videos.

### 2D Human Pose Top-Down Image Demo

@@ -18,7 +18,7 @@ python demo/image_demo.py \

If you use a heatmap-based model and set the argument `--draw-heatmap`, the predicted heatmap will be visualized together with the keypoints.

-The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/body_2d_keypoint.html).
+The pre-trained human pose estimation models can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/body_2d_keypoint.html).
Take [coco model](https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth) as an example:

```shell
@@ -28,7 +28,6 @@ python demo/image_demo.py \
https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth \
--out-file vis_results.jpg \
--draw-heatmap

```

To run this demo on CPU:
@@ -63,7 +62,7 @@ python demo/topdown_demo_with_mmdet.py \
[--bbox-thr ${BBOX_SCORE_THR} --kpt-thr ${KPT_SCORE_THR}]
```

-Examples:
+Example:

```shell
python demo/topdown_demo_with_mmdet.py \
@@ -85,7 +84,7 @@ The above demo script can also take video as input, and run mmdet for human dete

Assume that you have already installed [mmdet](https://github.com/open-mmlab/mmdetection) with version >= 3.0.

-Examples:
+Example:

```shell
python demo/topdown_demo_with_mmdet.py \
@@ -94,7 +93,7 @@ python demo/topdown_demo_with_mmdet.py \
configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192.py \
https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w32_coco_256x192-c78dce93_20200708.pth \
--input tests/data/posetrack18/videos/000001_mpiinew_test/000001_mpiinew_test.mp4 \
---output-root=vis_results/demo --show --draw-heatmap
+--output-root=vis_results/demo --show --draw-heatmap
```

### Speed Up Inference
@@ -104,4 +103,4 @@ Some tips to speed up MMPose inference:
For top-down models, try to edit the config file. For example,

1. set `model.test_cfg.flip_test=False` in [topdown-res50](/configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_res50_8xb64-210e_coco-256x192.py#L56).
-2. use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/en/latest/model_zoo.html).
+2. use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html).