[Doc] Add model zoo and update doc index (open-mmlab#1618)
* add task-level README files

* update README.md

* update compiling commands

* update doc dependency

* fix bugs

* update cn readme

* update cn readme

* update sphinx version

* fix bug

* modify doc structure

* fix bug

* add cn doc skeleton

* update cn docs
ly015 authored Sep 1, 2022
1 parent 6be61ce commit bf67628
Showing 198 changed files with 7,331 additions and 2,827 deletions.
234 changes: 105 additions & 129 deletions README.md

Large diffs are not rendered by default.

238 changes: 107 additions & 131 deletions README_CN.md

Large diffs are not rendered by default.

Empty file.
18 changes: 18 additions & 0 deletions configs/animal_2d_keypoint/README.md
@@ -0,0 +1,18 @@
# 2D Animal Keypoint Detection

2D animal keypoint detection (animal pose estimation) aims to detect the keypoints of different species, including rats, dogs, macaques, and cheetahs. It provides detailed behavioral analysis for neuroscience, medical, and ecological applications.

## Data preparation

Please follow [DATA Preparation](/docs/en/dataset_zoo/2d_animal_keypoint.md) to prepare data.

## Demo

Please follow [DEMO](/demo/docs/2d_animal_demo.md) to generate fancy demos.

<img src="https://user-images.githubusercontent.com/11788150/114201893-4446ec00-9989-11eb-808b-5718c47c7b23.gif" height="140px" alt><br>

<img src="https://user-images.githubusercontent.com/11788150/114205282-b5d46980-998c-11eb-9d6b-85ba47f81252.gif" height="140px" alt><br>

<img src="https://user-images.githubusercontent.com/11788150/114023530-944c8280-98a5-11eb-86b0-5f6d3e232af0.gif" height="140px" alt><br>
7 changes: 7 additions & 0 deletions configs/animal_2d_keypoint/topdown_heatmap/README.md
@@ -0,0 +1,7 @@
# Top-down heatmap-based pose estimation

Top-down methods divide the task into two stages: object detection and pose estimation.

They perform object detection first, followed by single-object pose estimation given object bounding boxes.
Instead of estimating keypoint coordinates directly, the pose estimator produces heatmaps that represent the likelihood of each location being a keypoint.
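
To make this concrete, a minimal NumPy sketch of the decoding step: each keypoint is recovered by taking the argmax of its heatmap. The helper name and heatmap size below are illustrative assumptions, not the MMPose implementation.

```python
import numpy as np

def decode_heatmaps(heatmaps: np.ndarray):
    """Decode (K, H, W) heatmaps into K keypoint coordinates and scores."""
    num_keypoints, _, width = heatmaps.shape
    # Flatten each heatmap and take the location of its maximum response.
    flat = heatmaps.reshape(num_keypoints, -1)
    indices = flat.argmax(axis=1)
    scores = flat.max(axis=1)
    # Convert flat indices back to (x, y) coordinates on the heatmap grid.
    xs = indices % width
    ys = indices // width
    keypoints = np.stack([xs, ys], axis=1).astype(np.float32)
    return keypoints, scores

# Example: 17 COCO-style keypoints on a 64x48 heatmap grid.
dummy = np.random.rand(17, 64, 48).astype(np.float32)
kpts, conf = decode_heatmaps(dummy)
print(kpts.shape, conf.shape)  # (17, 2) (17,)
```

In practice the argmax locations are further refined (e.g. with sub-pixel offsets) and rescaled from the heatmap grid back to the original image.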
Empty file removed configs/body_2d_keypoint/.gitkeep
Empty file.
19 changes: 19 additions & 0 deletions configs/body_2d_keypoint/README.md
@@ -0,0 +1,19 @@
# Human Body 2D Pose Estimation

Multi-person human pose estimation is defined as the task of detecting the poses (or keypoints) of all people from an input image.

Existing approaches can be categorized into top-down and bottom-up approaches.

Top-down methods (e.g. DeepPose) divide the task into two stages: human detection and pose estimation. They perform human detection first, followed by single-person pose estimation given human bounding boxes.

Bottom-up approaches (e.g. Associative Embedding) first detect all the keypoints and then group/associate them into person instances.
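
As a rough sketch of the grouping idea (a simplified illustration rather than the Associative Embedding implementation in this repository), each detected keypoint carries a scalar tag, and keypoints with similar tags are grouped into the same person; the function name, data layout, and threshold below are assumptions.

```python
def group_by_tags(detections, tag_threshold=1.0):
    """Group per-joint detections into person instances by tag similarity.

    detections: list of (joint_id, x, y, tag) tuples for one image.
    Returns a list of dicts mapping joint_id -> (x, y), one dict per person.
    """
    people = []  # each person: {"tags": [...], "joints": {joint_id: (x, y)}}
    for joint_id, x, y, tag in detections:
        best, best_dist = None, tag_threshold
        for person in people:
            # Compare the keypoint tag with the person's mean tag value.
            mean_tag = sum(person["tags"]) / len(person["tags"])
            dist = abs(tag - mean_tag)
            # Each joint type may appear at most once per person.
            if dist < best_dist and joint_id not in person["joints"]:
                best, best_dist = person, dist
        if best is None:  # no group is close enough -> start a new person
            best = {"tags": [], "joints": {}}
            people.append(best)
        best["tags"].append(tag)
        best["joints"][joint_id] = (x, y)
    return [person["joints"] for person in people]

# Two noses (joint 0) and two left wrists (joint 9) with well-separated tags
# are grouped into two person instances.
dets = [(0, 10, 12, 0.1), (0, 80, 15, 2.0), (9, 14, 40, 0.2), (9, 85, 44, 2.1)]
print(len(group_by_tags(dets)))  # 2
```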

## Data preparation

Please follow [DATA Preparation](/docs/en/dataset_zoo/2d_body_keypoint.md) to prepare data.

## Demo

Please follow [Demo](/demo/docs/2d_human_pose_demo.md#2d-human-pose-demo) to run demos.

<img src="/demo/resources/demo_coco.gif" width="600px" alt>
7 changes: 7 additions & 0 deletions configs/body_2d_keypoint/topdown_heatmap/README.md
@@ -0,0 +1,7 @@
# Top-down heatmap-based pose estimation

Top-down methods divide the task into two stages: object detection and pose estimation.

They perform object detection first, followed by single-object pose estimation given object bounding boxes.
Instead of estimating keypoint coordinates directly, the pose estimator produces heatmaps that represent the likelihood of each location being a keypoint.
5 changes: 5 additions & 0 deletions configs/body_2d_keypoint/topdown_regression/README.md
@@ -0,0 +1,5 @@
# Top-down regression-based pose estimation

Top-down methods divide the task into two stages: object detection and pose estimation.

They perform object detection first, followed by single-object pose estimation given object bounding boxes. With features extracted from the bounding box area, the model learns to directly regress the keypoint coordinates.
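
To make the contrast with heatmap-based heads concrete, here is a minimal PyTorch sketch of a regression head; `KeypointRegressionHead`, the channel count, and the sigmoid normalization are illustrative assumptions rather than the implementation used in this repository.

```python
import torch
import torch.nn as nn

class KeypointRegressionHead(nn.Module):
    """Toy head that regresses normalized (x, y) coordinates for K keypoints."""

    def __init__(self, in_channels: int = 2048, num_keypoints: int = 17):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.pool = nn.AdaptiveAvgPool2d(1)                  # collapse spatial dims
        self.fc = nn.Linear(in_channels, num_keypoints * 2)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (N, C, H, W) backbone output for the cropped bounding box
        x = self.pool(features).flatten(1)                   # (N, C)
        coords = self.fc(x).reshape(-1, self.num_keypoints, 2)
        return coords.sigmoid()                              # normalized to [0, 1]

head = KeypointRegressionHead()
out = head(torch.randn(2, 2048, 8, 6))
print(out.shape)  # torch.Size([2, 17, 2])
```

Training would supervise these coordinates directly with an L1/L2-style loss instead of a heatmap target.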
Empty file removed configs/body_3d_keypoint/.gitkeep
Empty file.
13 changes: 13 additions & 0 deletions configs/body_3d_keypoint/README.md
@@ -0,0 +1,13 @@
# Human Body 3D Pose Estimation

3D human body pose estimation aims at predicting the X, Y, Z coordinates of human body joints. Based on the number of cameras used to capture the images or videos, existing works can be further divided into multi-view methods and single-view (monocular) methods.

## Data preparation

Please follow [DATA Preparation](/docs/en/dataset_zoo/3d_body_keypoint.md) to prepare data.

## Demo

Please follow [Demo](/demo/docs/3d_human_pose_demo.md) to run demos.

<img src="https://user-images.githubusercontent.com/15977946/118820606-02df2000-b8e9-11eb-9984-b9228101e780.gif" width="600px" alt><br>
Empty file removed configs/face_2d_keypoint/.gitkeep
Empty file.
16 changes: 16 additions & 0 deletions configs/face_2d_keypoint/README.md
@@ -0,0 +1,16 @@
# 2D Face Landmark Detection

2D face landmark detection (also referred to as face alignment) is defined as the task of detecting the face keypoints from an input image.

Normally, the input images are cropped face images, where the face is located at the center, or the rough location (or the bounding box) of the face is provided.

## Data preparation

Please follow [DATA Preparation](/docs/en/dataset_zoo/2d_face_keypoint.md) to prepare data.

## Demo

Please follow [Demo](/demo/docs/2d_face_demo.md) to run demos.

<img src="https://user-images.githubusercontent.com/11788150/109144943-ccd44900-779c-11eb-9e9d-8682e7629654.gif" width="600px" alt><br>
7 changes: 7 additions & 0 deletions configs/face_2d_keypoint/topdown_heatmap/README.md
@@ -0,0 +1,7 @@
# Top-down heatmap-based pose estimation

Top-down methods divide the task into two stages: object detection and pose estimation.

They perform object detection first, followed by single-object pose estimation given object bounding boxes.
Instead of estimating keypoint coordinates directly, the pose estimator produces heatmaps that represent the likelihood of each location being a keypoint.
Empty file.
7 changes: 7 additions & 0 deletions configs/fashion_2d_keypoint/README.md
@@ -0,0 +1,7 @@
# 2D Fashion Landmark Detection

2D fashion landmark detection (also referred to as fashion alignment) aims to detect the keypoints located at the functional regions of clothes, for example, the neckline and the cuff.

## Data preparation

Please follow [DATA Preparation](/docs/en/dataset_zoo/2d_fashion_landmark.md) to prepare data.
Empty file removed configs/hand_2d_keypoint/.gitkeep
Empty file.
16 changes: 16 additions & 0 deletions configs/hand_2d_keypoint/README.md
@@ -0,0 +1,16 @@
# 2D Hand Pose Estimation

2D hand pose estimation is defined as the task of detecting the poses (or keypoints) of the hand from an input image.

Normally, the input images are cropped hand images, where the hand is located at the center, or the rough location (or the bounding box) of the hand is provided.

## Data preparation

Please follow [DATA Preparation](/docs/en/dataset_zoo/2d_hand_keypoint.md) to prepare data.

## Demo

Please follow [Demo](/demo/docs/2d_hand_demo.md) to run demos.

<img src="https://user-images.githubusercontent.com/11788150/109098558-8c54db00-775c-11eb-8966-85df96b23dc5.gif" width="600px" alt><br>
7 changes: 7 additions & 0 deletions configs/hand_2d_keypoint/topdown_heatmap/README.md
@@ -0,0 +1,7 @@
# Top-down heatmap-based pose estimation

Top-down methods divide the task into two stages: object detection and pose estimation.

They perform object detection first, followed by single-object pose estimation given object bounding boxes.
Instead of estimating keypoint coordinates directly, the pose estimator produces heatmaps that represent the likelihood of each location being a keypoint.
Empty file removed configs/hand_3d_keypoint/.gitkeep
Empty file.
7 changes: 7 additions & 0 deletions configs/hand_3d_keypoint/README.md
@@ -0,0 +1,7 @@
# 3D Hand Pose Estimation

3D hand pose estimation is defined as the task of estimating the 3D poses (or keypoints) of the hand from an input image.

## Data preparation

Please follow [DATA Preparation](/docs/en/dataset_zoo/3d_hand_keypoint.md) to prepare data.
Empty file removed configs/hand_gesture/.gitkeep
Empty file.
13 changes: 13 additions & 0 deletions configs/hand_gesture/README.md
@@ -0,0 +1,13 @@
# Gesture Recognition

Gesture recognition aims to recognize hand gestures from videos, such as a thumbs-up.

## Data preparation

Please follow [DATA Preparation](/docs/en/dataset_zoo/2d_hand_gesture.md) to prepare data.

## Demo

Please follow [Demo](/demo/docs/gesture_recognition_demo.md) to run the demo.

<img src="https://user-images.githubusercontent.com/15977946/172213082-afb9d71a-f2df-4509-932c-e47dc61ec7d7.gif" width="600px" alt>
Empty file.
19 changes: 19 additions & 0 deletions configs/wholebody_2d_keypoint/README.md
@@ -0,0 +1,19 @@
# 2D Human Whole-Body Pose Estimation

2D human whole-body pose estimation aims to localize dense landmarks on the entire human body including face, hands, body, and feet.

Existing approaches can be categorized into top-down and bottom-up approaches.

Top-down methods divide the task into two stages: human detection and whole-body pose estimation. They perform human detection first, followed by single-person whole-body pose estimation given human bounding boxes.

Bottom-up approaches (e.g. AE) first detect all the whole-body keypoints and then group/associate them into person instances.

## Data preparation

Please follow [DATA Preparation](/docs/en/dataset_zoo/2d_wholebody_keypoint.md) to prepare data.

## Demo

Please follow [Demo](/demo/docs/2d_wholebody_pose_demo.md) to run demos.

<img src="https://user-images.githubusercontent.com/9464825/95552839-00a61080-0a40-11eb-818c-b8dad7307217.gif" width="600px" alt><br>
7 changes: 7 additions & 0 deletions configs/wholebody_2d_keypoint/topdown_heatmap/README.md
@@ -0,0 +1,7 @@
# Top-down heatmap-based pose estimation

Top-down methods divide the task into two stages: object detection and pose estimation.

They perform object detection first, followed by single-object pose estimation given object bounding boxes.
Instead of estimating keypoint coordinates directly, the pose estimator produces heatmaps that represent the likelihood of each location being a keypoint.
2 changes: 1 addition & 1 deletion demo/docs/2d_animal_demo.md
@@ -15,7 +15,7 @@ python demo/topdown_demo_with_mmdet.py \
[--device ${GPU_ID or CPU}]
```

-The pre-trained animal pose estimation model can be found from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/tasks/animal.html).
+The pre-trained animal pose estimation model can be found from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/animal.html).
Take [animalpose model](https://download.openmmlab.com/mmpose/animal/hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth) as an example:

```shell
2 changes: 1 addition & 1 deletion demo/docs/2d_face_demo.md
@@ -16,7 +16,7 @@ python demo/topdown_face_demo.py \
[--kpt-thr ${KPT_SCORE_THR}]
```

-The pre-trained face keypoint estimation model can be found from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/tasks/face.html).
+The pre-trained face keypoint estimation model can be found from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/face.html).
Take [aflw model](https://download.openmmlab.com/mmpose/face/hrnetv2/hrnetv2_w18_aflw_256x256-f2bbc62b_20210125.pth) as an example:

```shell
2 changes: 1 addition & 1 deletion demo/docs/2d_hand_demo.md
@@ -19,7 +19,7 @@ python demo/topdown_demo_with_mmdet.py \

```

-The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/tasks/hand.html).
+The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/hand.html).
Take [onehand10k model](https://download.openmmlab.com/mmpose/hand/hrnetv2/hrnetv2_w18_onehand10k_256x256-30bc9c6b_20210330.pth) as an example:

```shell
2 changes: 1 addition & 1 deletion demo/docs/2d_human_pose_demo.md
@@ -16,7 +16,7 @@ python demo/image_demo.py \
[--draw_heatmap]
```

-The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/tasks/body.html).
+The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/body.html).
Take [coco model](https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth) as an example:

```shell
2 changes: 1 addition & 1 deletion demo/docs/2d_wholebody_pose_demo.md
@@ -16,7 +16,7 @@ python demo/image_demo.py \
[--draw_heatmap]
```

-The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/tasks/wholebody.html).
+The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/wholebody.html).
Take [coco-wholebody_vipnas_res50_dark](https://download.openmmlab.com/mmpose/top_down/vipnas/vipnas_res50_wholebody_256x192_dark-67c0ce35_20211112.pth) model as an example:

```shell
Binary file removed docs/en/_src/imgs/acc_curve.png
Binary file not shown.
101 changes: 0 additions & 101 deletions docs/en/collect.py

This file was deleted.

