[Docs] add reid training README.md #210

Merged: 5 commits, Jul 14, 2021
2 changes: 1 addition & 1 deletion configs/mot/deepsort/README.md
@@ -25,7 +25,7 @@

## Results and models on MOT17

- We implement SORT and DeepSORT with independent detector and ReID models. To train a model by yourself, you need to train a detector following [here](../../det/) and also train a ReID model.
+ We implement SORT and DeepSORT with independent detector and ReID models. To train a model by yourself, you need to train a detector following [here](../../det/) and also train a ReID model following [here](../../reid).
The configs in this folder are basically for inference.

Currently we do not support training ReID models.
144 changes: 144 additions & 0 deletions configs/reid/README.md
@@ -0,0 +1,144 @@
# Training a ReID Model

You may want to train a ReID model for multiple object tracking or other applications. We support ReID model training in MMTracking, which is built upon [MMClassification](https://github.com/open-mmlab/mmclassification).

## 1. Standard Dataset

This section will show how to train a ReID model on standard datasets, e.g., MOT17.

### Dataset Preparation

We need to download datasets following [here](https://github.com/open-mmlab/mmtracking/blob/master/docs/dataset.md). We use [ReIDDataset](https://github.com/open-mmlab/mmtracking/blob/master/mmtrack/datasets/reid_dataset.py), inherited from [BaseDataset](https://github.com/open-mmlab/mmclassification/blob/master/mmcls/datasets/base_dataset.py), to handle standard datasets. Therefore, you need to convert the official dataset to this format. We provide scripts and their usage as follows:

```shell
python ./tools/convert_datasets/mot2reid.py -i ./data/MOT17/ -o ./data/MOT17/reid --val-split 0.2 --vis-threshold 0.3
```

Arguments:

- `--val-split`: Proportion of the validation dataset to the whole ReID dataset.
- `--vis-threshold`: Threshold of visibility for each person.

The directory of the converted datasets is as follows:

```
MOT17
├── train
├── test
├── reid
│ ├── imgs
│ │ ├── MOT17-02-FRCNN_000002
│ │ │ ├── 000000.jpg
│ │ │ ├── 000001.jpg
│ │ │ ├── ...
│ │ ├── MOT17-02-FRCNN_000003
│ │ │ ├── 000000.jpg
│ │ │ ├── 000001.jpg
│ │ │ ├── ...
│ ├── meta
│ │ ├── train_80.txt
│ │ ├── val_20.txt
```

Note: `80` in `train_80.txt` means that the proportion of the training dataset to the whole ReID dataset is eighty percent, while the proportion of the validation dataset is twenty percent.

For training, we provide an annotation list `train_80.txt`. Each line of the list contains a filename and its corresponding ground-truth label. The format is as follows:

```
MOT17-05-FRCNN_000110/000018.jpg 0
MOT17-13-FRCNN_000146/000014.jpg 1
MOT17-05-FRCNN_000088/000004.jpg 2
MOT17-02-FRCNN_000009/000081.jpg 3
```

For validation, the annotation list `val_20.txt` follows the same format as above.

Note: Images in `MOT17/reid/imgs` are cropped from raw images in `MOT17/train` according to the corresponding `gt.txt`. The values of the ground-truth labels should fall in the range `[0, num_classes - 1]`.
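
If you want to double-check the conversion, a short standalone script (not part of MMTracking; the paths assume the directory layout shown above) can verify that the labels are contiguous and that every listed image exists:

```python
# Standalone sanity check for a converted annotation list (assumed paths).
import os

ann_file = './data/MOT17/reid/meta/train_80.txt'
img_root = './data/MOT17/reid/imgs'

labels = []
with open(ann_file) as f:
    for line in f:
        filename, label = line.strip().rsplit(' ', 1)
        # every listed crop should exist on disk
        assert os.path.isfile(os.path.join(img_root, filename)), filename
        labels.append(int(label))

# labels should be exactly the contiguous integers 0 .. num_classes - 1
num_classes = max(labels) + 1
assert sorted(set(labels)) == list(range(num_classes))
print(f'{len(labels)} images, {num_classes} identities')
```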

### Training

#### Training on a single GPU

```shell
python tools/train.py configs/reid/resnet50_b32*8_MOT17.py [optional arguments]
```

During training, log files and checkpoints will be saved to the working directory, which is specified by `work_dir` in the config file or via CLI argument `--work-dir`.
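
For example, a minimal invocation that sets the working directory explicitly (the directory name here is only an illustration):

```shell
python tools/train.py configs/reid/resnet50_b32*8_MOT17.py --work-dir ./work_dirs/reid_mot17
```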

#### Training on multiple GPUs

We provide `tools/dist_train.sh` to launch training on multiple GPUs.
The basic usage is as follows.

```shell
bash ./tools/dist_train.sh \
configs/reid/resnet50_b32*8_MOT17.py \
${GPU_NUM} \
[optional arguments]
```

Optional arguments remain the same as stated above.
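
For example, a sketch of a launch on 8 GPUs (the GPU count and working directory are illustrative, not prescribed):

```shell
bash ./tools/dist_train.sh \
    configs/reid/resnet50_b32*8_MOT17.py \
    8 \
    --work-dir ./work_dirs/reid_mot17
```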

For more training details, please refer to [here](https://github.com/open-mmlab/mmtracking/blob/master/docs/quick_run.md).

## 2. Customized Dataset

This section will show how to train a ReID model on customized datasets.

### Dataset Preparation

You need to convert your customized dataset to an existing dataset format.

#### An example of customized dataset

Assume we are going to implement a `Filelist` dataset, which takes filelists for both training and testing. The directory of the dataset is as follows:

```
Filelist
├── imgs
│ ├── person1
│ │ ├── 000000.jpg
│ │ ├── 000001.jpg
│ │ ├── ...
│ ├── person2
│ │ ├── 000000.jpg
│ │ ├── 000001.jpg
│ │ ├── ...
├── meta
│ ├── train.txt
│ ├── val.txt
```

The format of the annotation list is as follows:

```
person1/000000.jpg 0
person1/000001.jpg 0
person2/000000.jpg 1
person2/000001.jpg 1
```
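
If your raw data already sits in this layout, a small standalone script (not shipped with MMTracking; the 80/20 identity split is only an example) can generate the two lists:

```python
# Standalone sketch: write train.txt / val.txt for the Filelist layout above.
# Identities are split 80/20 and labels are re-indexed from 0 in each split,
# so that labels fall in [0, num_classes - 1].
import os

data_root = 'data/Filelist'
persons = sorted(os.listdir(os.path.join(data_root, 'imgs')))
num_train = int(len(persons) * 0.8)
os.makedirs(os.path.join(data_root, 'meta'), exist_ok=True)

splits = {'train': persons[:num_train], 'val': persons[num_train:]}
for name, subset in splits.items():
    with open(os.path.join(data_root, 'meta', f'{name}.txt'), 'w') as f:
        for label, person in enumerate(subset):
            person_dir = os.path.join(data_root, 'imgs', person)
            for img in sorted(os.listdir(person_dir)):
                f.write(f'{person}/{img} {label}\n')
```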

You can directly use [ReIDDataset](https://github.com/open-mmlab/mmtracking/blob/master/mmtrack/datasets/reid_dataset.py). In this case, you only need to modify the config as follows:

```python
# modify the path of annotation files and the image path prefix
data = dict(
train=dict(
data_prefix='data/Filelist/imgs',
ann_file='data/Filelist/meta/train.txt'),
val=dict(
data_prefix='data/Filelist/imgs',
ann_file='data/Filelist/meta/val.txt'),
test=dict(
data_prefix='data/Filelist/imgs',
ann_file='data/Filelist/meta/val.txt'),
)
# modify the number of classes, assume your training set has 100 classes
model = dict(reid=dict(head=dict(num_classes=100)))
```

You can also write a new Dataset class inherited from `BaseDataset` and override `load_annotations(self)`. For more details, please follow [here](https://github.com/open-mmlab/mmclassification/blob/master/docs/tutorials/new_dataset.md).
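
As a rough sketch, assuming the `BaseDataset` interface described in the linked MMClassification tutorial (a `load_annotations` that returns a list of per-image dicts; the registry import may differ in your setup), such a class could look like:

```python
# Sketch only: the registry import and the exact dict keys follow the
# MMClassification tutorial linked above and may differ in your version.
import numpy as np
from mmcls.datasets import DATASETS, BaseDataset


@DATASETS.register_module()
class FilelistDataset(BaseDataset):
    """Reads `filename label` pairs from `ann_file`, as in the lists above."""

    def load_annotations(self):
        data_infos = []
        with open(self.ann_file) as f:
            for line in f:
                filename, gt_label = line.strip().rsplit(' ', 1)
                info = dict(
                    img_prefix=self.data_prefix,
                    img_info=dict(filename=filename),
                    gt_label=np.array(gt_label, dtype=np.int64))
                data_infos.append(info)
        return data_infos
```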

### Training

The training stage is the same as in `1. Standard Dataset`.
8 changes: 4 additions & 4 deletions docs_zh-CN/install.md
@@ -40,7 +40,7 @@
```shell
conda install pytorch==1.5 cudatoolkit=10.1 torchvision -c pytorch
```

`Example 2`: If CUDA 9.2 is installed under `/usr/local/cuda` and you want to install PyTorch 1.3.1, you need to install the pre-built PyTorch that supports CUDA 9.2:

```shell
@@ -69,7 +69,7 @@
```shell
pip install mmcv-full
```

4. Install MMDetection:

```shell
@@ -84,14 +84,14 @@
pip install -r requirements/build.txt
pip install -v -e . # or "python setup.py develop"
```

5. Clone the MMTracking repository to your local machine:

```shell
git clone https://github.com/open-mmlab/mmtracking.git
cd mmtracking
```

6. Install the dependencies first, and then install MMTracking:

```shell