Commit

[Docs] Replace the models used in the tutorial document with RTMDet (o…
Zheng-LinXiao authored Feb 27, 2023
1 parent 13aa724 commit ffc2bb3
Showing 12 changed files with 292 additions and 474 deletions.
376 changes: 95 additions & 281 deletions demo/inference_demo.ipynb

Large diffs are not rendered by default.

10 changes: 5 additions & 5 deletions docs/en/get_started.md
@@ -76,17 +76,17 @@ To verify whether MMDetection is installed correctly, we provide some sample cod
**Step 1.** We need to download config and checkpoint files.

```shell
- mim download mmdet --config yolov3_mobilenetv2_8xb24-320-300e_coco --dest .
+ mim download mmdet --config rtmdet_tiny_8xb32-300e_coco --dest .
```

- The downloading will take several seconds or more, depending on your network environment. When it is done, you will find two files `yolov3_mobilenetv2_8xb24-320-300e_coco.py` and `yolov3_mobilenetv2_320_300e_coco_20210719_215349-d18dff72.pth` in your current folder.
+ The downloading will take several seconds or more, depending on your network environment. When it is done, you will find two files `rtmdet_tiny_8xb32-300e_coco.py` and `rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth` in your current folder.

**Step 2.** Verify the inference demo.

Case a: If you install MMDetection from source, just run the following command.

```shell
- python demo/image_demo.py demo/demo.jpg yolov3_mobilenetv2_8xb24-320-300e_coco.py --weights yolov3_mobilenetv2_320_300e_coco_20210719_215349-d18dff72.pth --device cpu
+ python demo/image_demo.py demo/demo.jpg rtmdet_tiny_8xb32-300e_coco.py --weights rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth --device cpu
```

You will see a new image `demo.jpg` in your `./outputs/vis` folder, where bounding boxes are plotted on cars, benches, etc.
@@ -96,8 +96,8 @@ Case b: If you install MMDetection with MIM, open your python interpreter and co
```python
from mmdet.apis import init_detector, inference_detector

- config_file = 'yolov3_mobilenetv2_8xb24-320-300e_coco.py'
- checkpoint_file = 'yolov3_mobilenetv2_320_300e_coco_20210719_215349-d18dff72.pth'
+ config_file = 'rtmdet_tiny_8xb32-300e_coco.py'
+ checkpoint_file = 'rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth'
model = init_detector(config_file, checkpoint_file, device='cpu') # or device='cuda:0'
inference_detector(model, 'demo/demo.jpg')
```
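If you want to inspect the predictions programmatically rather than only visualize them, the returned object can be queried as in the sketch below. This is a minimal sketch assuming the MMDetection 3.x result format, where predictions are stored in a `DetDataSample`'s `pred_instances`; the 0.3 score threshold is an arbitrary value chosen for illustration.

```python
from mmdet.apis import init_detector, inference_detector

config_file = 'rtmdet_tiny_8xb32-300e_coco.py'
checkpoint_file = 'rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth'
model = init_detector(config_file, checkpoint_file, device='cpu')

# In MMDetection 3.x, inference_detector returns a DetDataSample whose
# predictions live in `pred_instances` (bboxes, scores, labels).
result = inference_detector(model, 'demo/demo.jpg')
pred = result.pred_instances
keep = pred.scores > 0.3          # arbitrary confidence threshold for illustration
print(pred.bboxes[keep])          # (N, 4) tensor of x1, y1, x2, y2 boxes
print(pred.labels[keep])          # class indices into model.dataset_meta['classes']
```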
66 changes: 34 additions & 32 deletions docs/en/user_guides/config.md

Large diffs are not rendered by default.

24 changes: 12 additions & 12 deletions docs/en/user_guides/inference.md
@@ -3,9 +3,9 @@
MMDetection provides hundreds of pre-trained detection models in [Model Zoo](https://mmdetection.readthedocs.io/en/latest/model_zoo.html).
This note will show how to run inference, which means using trained models to detect objects in images.

- In MMDetection, a model is defined by a [configuration file](config.md) and existing model parameters are saved in a checkpoint file.
+ In MMDetection, a model is defined by a [configuration file](https://mmdetection.readthedocs.io/en/3.x/user_guides/config.html) and existing model parameters are saved in a checkpoint file.

- To start with, we recommend [Faster RCNN](https://github.com/open-mmlab/mmdetection/blob/3.x/configs/faster_rcnn) with this [configuration file](https://github.com/open-mmlab/mmdetection/blob/3.x/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py) and this [checkpoint file](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth). It is recommended to download the checkpoint file to `checkpoints` directory.
+ To start with, we recommend [RTMDet](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/rtmdet) with this [configuration file](https://github.com/open-mmlab/mmdetection/blob/3.x/configs/rtmdet/rtmdet_l_8xb32-300e_coco.py) and this [checkpoint file](https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_l_8xb32-300e_coco/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth). It is recommended to download the checkpoint file to the `checkpoints` directory.
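If you prefer to fetch the checkpoint from Python rather than with a browser, here is a minimal sketch that downloads the checkpoint linked above into `checkpoints/`. The use of `urllib` is just one option chosen for illustration; `mim download` or `wget` would work equally well.

```python
# Sketch: download the RTMDet-L checkpoint referenced above into ./checkpoints/.
import os
import urllib.request

url = ('https://download.openmmlab.com/mmdetection/v3.0/rtmdet/'
       'rtmdet_l_8xb32-300e_coco/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth')
os.makedirs('checkpoints', exist_ok=True)
urllib.request.urlretrieve(url, os.path.join('checkpoints', os.path.basename(url)))
```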

## High-level APIs for inference

@@ -21,8 +21,8 @@ from mmdet.apis import init_detector, inference_detector


# Specify the path to model config and checkpoint file
- config_file = 'configs/faster_rcnn/faster-rcnn_r50-fpn_1x_coco.py'
- checkpoint_file = 'checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
+ config_file = 'configs/rtmdet/rtmdet_l_8xb32-300e_coco.py'
+ checkpoint_file = 'checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth'

# Build the model from a config file and a checkpoint file
model = init_detector(config_file, checkpoint_file, device='cuda:0')
@@ -110,8 +110,8 @@ Examples:

```shell
python demo/image_demo.py demo/demo.jpg \
- configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
- --weights checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
+ configs/rtmdet/rtmdet_l_8xb32-300e_coco.py \
+ --weights checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth \
--device cpu
```

@@ -132,8 +132,8 @@ Examples:

```shell
python demo/webcam_demo.py \
- configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
- checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth
+ configs/rtmdet/rtmdet_l_8xb32-300e_coco.py \
+ checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth
```

### Video demo
@@ -156,8 +156,8 @@ Examples:

```shell
python demo/video_demo.py demo/demo.mp4 \
- configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
- checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
+ configs/rtmdet/rtmdet_l_8xb32-300e_coco.py \
+ checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth \
--out result.mp4
```

@@ -182,7 +182,7 @@ Examples:

```shell
python demo/video_gpuaccel_demo.py demo/demo.mp4 \
- configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
- checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
+ configs/rtmdet/rtmdet_l_8xb32-300e_coco.py \
+ checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth \
--nvdecode --out result.mp4
```
16 changes: 8 additions & 8 deletions docs/en/user_guides/test.md
@@ -54,23 +54,23 @@ Optional arguments:

Assuming that you have already downloaded the checkpoints to the directory `checkpoints/`.

- 1. Test Faster R-CNN and visualize the results. Press any key for the next image.
- Config and checkpoint files are available [here](../../../configs/faster_rcnn).
+ 1. Test RTMDet and visualize the results. Press any key for the next image.
+ Config and checkpoint files are available [here](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/rtmdet).

```shell
python tools/test.py \
- configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
- checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
+ configs/rtmdet/rtmdet_l_8xb32-300e_coco.py \
+ checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth \
--show
```

- 2. Test Faster R-CNN and save the painted images for future visualization.
- Config and checkpoint files are available [here](../../../configs/faster_rcnn).
+ 2. Test RTMDet and save the painted images for future visualization.
+ Config and checkpoint files are available [here](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/rtmdet).

```shell
python tools/test.py \
- configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
- checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
+ configs/rtmdet/rtmdet_l_8xb32-300e_coco.py \
+ checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth \
--show-dir faster_rcnn_r50_fpn_1x_results
```

6 changes: 3 additions & 3 deletions docs/en/user_guides/train.md
@@ -13,7 +13,7 @@ You could download the existing models in advance if the network connection is u

## Learning rate auto scaling

- **Important**: The default learning rate in config files is for 8 GPUs and 2 sample per GPU (batch size = 8 * 2 = 16). And it had been set to `auto_scale_lr.base_batch_size` in `config/_base_/default_runtime.py`. Learning rate will be automatically scaled base on this value when the batch size is `16`. Meanwhile, in order not to affect other codebase which based on mmdet, the flag `auto_scale_lr.enable` is set to `False` by default.
+ **Important**: The default learning rate in config files is for 8 GPUs and 2 samples per GPU (batch size = 8 * 2 = 16). This batch size has been set as `auto_scale_lr.base_batch_size` in `config/_base_/schedules/schedule_1x.py`. The learning rate will be automatically scaled based on this value when the batch size is `16`. Meanwhile, in order not to affect other codebases that build on mmdet, the flag `auto_scale_lr.enable` is set to `False` by default.

If you want to enable this feature, you need to add the argument `--auto-scale-lr`. You also need to check the name of the config you want to use before running the command, because the config name indicates the default batch size.
By default, it is `8 x 2 = 16`, as in `faster_rcnn_r50_caffe_fpn_90k_coco.py` or `pisa_faster_rcnn_x101_32x4d_fpn_1x_coco.py`. In other cases, the config file name contains `_NxM_`, like `cornernet_hourglass104_mstest_32x3_210e_coco.py` whose batch size is `32 x 3 = 96`, or `scnet_x101_64x4d_fpn_8x1_20e_coco.py` whose batch size is `8 x 1 = 8`.
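For reference, here is a sketch of what these settings look like in a config file. The field names are the ones named in this section; the values shown are the documented defaults, and the feature is switched on at run time with `python tools/train.py <config> --auto-scale-lr`.

```python
# Sketch of the auto-scale-lr settings described above; values are the
# documented defaults (8 GPUs * 2 samples per GPU = base batch size 16).
auto_scale_lr = dict(
    enable=False,        # off by default so downstream codebases are unaffected
    base_batch_size=16,  # when enabled, LR is scaled by actual_batch_size / 16
)
```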
@@ -436,7 +436,7 @@ To train a model with the new config, you can simply run
python tools/train.py configs/balloon/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py
```

- For more detailed usages, please refer to the [training guide](train.md).
+ For more detailed usages, please refer to the [training guide](https://mmdetection.readthedocs.io/en/3.x/user_guides/train.html#train-predefined-models-on-standard-datasets).

## Test and inference

@@ -446,4 +446,4 @@ To test the trained model, you can simply run
python tools/test.py configs/balloon/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py work_dirs/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon/epoch_12.pth
```
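If you would rather run the trained model from Python than through `tools/test.py`, here is a minimal sketch that reuses the config and checkpoint paths from the command above; the image path is a placeholder and should point at an image from the balloon dataset.

```python
from mmdet.apis import init_detector, inference_detector

# Paths taken from the test command above.
config_file = 'configs/balloon/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py'
checkpoint_file = 'work_dirs/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon/epoch_12.pth'

model = init_detector(config_file, checkpoint_file, device='cuda:0')  # or device='cpu'
# Placeholder path: replace with an image from the balloon validation set.
result = inference_detector(model, 'data/balloon/val/sample.jpg')
print(result.pred_instances.scores)
```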

- For more detailed usages, please refer to the [testing guide](test.md).
+ For more detailed usages, please refer to the [testing guide](https://mmdetection.readthedocs.io/en/3.x/user_guides/test.html).
10 changes: 5 additions & 5 deletions docs/zh_cn/get_started.md
@@ -75,17 +75,17 @@ mim install "mmdet>=3.0.0rc0"
**Step 1.** We need to download the config file and the model weights file.

```shell
- mim download mmdet --config yolov3_mobilenetv2_8xb24-320-300e_coco --dest .
+ mim download mmdet --config rtmdet_tiny_8xb32-300e_coco --dest .
```

- The download will take a few seconds or longer, depending on your network environment. When it is done, you will find two files `yolov3_mobilenetv2_8xb24-320-300e_coco.py` and `yolov3_mobilenetv2_320_300e_coco_20210719_215349-d18dff72.pth` in your current folder.
+ The download will take a few seconds or longer, depending on your network environment. When it is done, you will find two files `rtmdet_tiny_8xb32-300e_coco.py` and `rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth` in your current folder.

**Step 2.** Verify the inference.

Case a: If you installed MMDetection from source, just run the following command to verify:

```shell
- python demo/image_demo.py demo/demo.jpg yolov3_mobilenetv2_8xb24-320-300e_coco.py --weights yolov3_mobilenetv2_320_300e_coco_20210719_215349-d18dff72.pth --device cpu
+ python demo/image_demo.py demo/demo.jpg rtmdet_tiny_8xb32-300e_coco.py --weights rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth --device cpu
```

You will see a new image `demo.jpg` in the `outputs/vis` folder under your current directory, where the bounding boxes predicted by the network are plotted.
@@ -95,8 +95,8 @@ python demo/image_demo.py demo/demo.jpg yolov3_mobilenetv2_8xb24-320-300e_coco.p
```python
from mmdet.apis import init_detector, inference_detector

- config_file = 'yolov3_mobilenetv2_8xb24-320-300e_coco.py'
- checkpoint_file = 'yolov3_mobilenetv2_320_300e_coco_20210719_215349-d18dff72.pth'
+ config_file = 'rtmdet_tiny_8xb32-300e_coco.py'
+ checkpoint_file = 'rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth'
model = init_detector(config_file, checkpoint_file, device='cpu') # or device='cuda:0'
inference_detector(model, 'demo/demo.jpg')
```
