From 11d2a1c87202cd03325bc59d0b37d8efdaf1de49 Mon Sep 17 00:00:00 2001
From: ToumaKazusa3 <36536092+ToumaKazusa3@users.noreply.github.com>
Date: Wed, 14 Jul 2021 18:25:32 +0800
Subject: [PATCH] [Docs] add reid training README.md (#210)

* [Docs] add reid training readme

* fix some bug

* fix a bug

* change basedataset
---
 configs/mot/deepsort/README.md |   2 +-
 configs/reid/README.md         | 144 +++++++++++++++++++++++++++++++++
 docs_zh-CN/install.md          |   8 +-
 3 files changed, 149 insertions(+), 5 deletions(-)
 create mode 100644 configs/reid/README.md

diff --git a/configs/mot/deepsort/README.md b/configs/mot/deepsort/README.md
index 58b084e44..dd376c33a 100644
--- a/configs/mot/deepsort/README.md
+++ b/configs/mot/deepsort/README.md
@@ -25,7 +25,7 @@
## Results and models on MOT17

-We implement SORT and DeepSORT with independent detector and ReID models. To train a model by yourself, you need to train a detector following [here](../../det/) and also train a ReID model.
+We implement SORT and DeepSORT with independent detector and ReID models. To train a model by yourself, you need to train a detector following [here](../../det/) and also train a ReID model following [here](../../reid).
The configs in this folder are basically for inference.
Currently we do not support training ReID models.

diff --git a/configs/reid/README.md b/configs/reid/README.md
new file mode 100644
index 000000000..aa044a2d4
--- /dev/null
+++ b/configs/reid/README.md
@@ -0,0 +1,144 @@
# Training a ReID Model

You may want to train a ReID model for multiple object tracking or other applications. We support ReID model training in MMTracking, which is built upon [MMClassification](https://github.com/open-mmlab/mmclassification).

## 1. Standard Dataset

This section shows how to train a ReID model on a standard dataset, e.g. MOT17.

### Dataset Preparation

First, download the dataset following [here](https://github.com/open-mmlab/mmtracking/blob/master/docs/dataset.md). We use [ReIDDataset](https://github.com/open-mmlab/mmtracking/blob/master/mmtrack/datasets/reid_dataset.py), which inherits from [BaseDataset](https://github.com/open-mmlab/mmclassification/blob/master/mmcls/datasets/base_dataset.py), to handle standard datasets, so you need to convert the official dataset to this style. We provide a conversion script; its usage is as follows:

```shell
python ./tools/convert_datasets/mot2reid.py -i ./data/MOT17/ -o ./data/MOT17/reid --val-split 0.2 --vis-threshold 0.3
```

Arguments:

- `--val-split`: Proportion of the validation dataset to the whole ReID dataset.
- `--vis-threshold`: Visibility threshold for each person.

The directory of the converted dataset is as follows:

```
MOT17
├── train
├── test
├── reid
│   ├── imgs
│   │   ├── MOT17-02-FRCNN_000002
│   │   │   ├── 000000.jpg
│   │   │   ├── 000001.jpg
│   │   │   ├── ...
│   │   ├── MOT17-02-FRCNN_000003
│   │   │   ├── 000000.jpg
│   │   │   ├── 000001.jpg
│   │   │   ├── ...
│   ├── meta
│   │   ├── train_80.txt
│   │   ├── val_20.txt
```

Note: the `80` in `train_80.txt` means that the training split is eighty percent of the whole ReID dataset, while the validation split is the remaining twenty percent.

For training, we provide an annotation list `train_80.txt`. Each line of the list contains a filename and its corresponding ground-truth label. The format is as follows:

```
MOT17-05-FRCNN_000110/000018.jpg 0
MOT17-13-FRCNN_000146/000014.jpg 1
MOT17-05-FRCNN_000088/000004.jpg 2
MOT17-02-FRCNN_000009/000081.jpg 3
```

For validation, the annotation list `val_20.txt` follows the same format.

Note: Images in `MOT17/reid/imgs` are cropped from the raw images in `MOT17/train` according to the corresponding `gt.txt`. Ground-truth labels must fall in the range `[0, num_classes - 1]`.
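To make the list format concrete, here is a minimal sketch (not part of MMTracking; the path below is simply the output of the conversion script above) that parses an annotation list into `(filename, label)` pairs and derives the number of identities:

```python
# Minimal sketch: parse a ReID annotation list of "<filename> <label>" lines.
def load_reid_annotations(ann_file):
    samples = []
    with open(ann_file) as f:
        for line in f:
            # The label is the last whitespace-separated field on each line.
            filename, gt_label = line.strip().rsplit(' ', 1)
            samples.append((filename, int(gt_label)))
    return samples

samples = load_reid_annotations('data/MOT17/reid/meta/train_80.txt')
labels = [label for _, label in samples]
num_classes = max(labels) + 1  # labels are expected to fall in [0, num_classes - 1]
print(f'{len(samples)} images across {num_classes} identities')
```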
### Training

#### Training on a single GPU

```shell
python tools/train.py configs/reid/resnet50_b32x8_MOT17.py [optional arguments]
```

During training, log files and checkpoints are saved to the working directory, which is specified by `work_dir` in the config file or by the CLI argument `--work-dir`.

#### Training on multiple GPUs

We provide `tools/dist_train.sh` to launch training on multiple GPUs.
The basic usage is as follows.

```shell
bash ./tools/dist_train.sh \
    configs/reid/resnet50_b32x8_MOT17.py \
    ${GPU_NUM} \
    [optional arguments]
```

Optional arguments remain the same as stated above.

For more training details, please refer to [here](https://github.com/open-mmlab/mmtracking/blob/master/docs/quick_run.md).

## 2. Customized Dataset

This section shows how to train a ReID model on a customized dataset.

### Dataset Preparation

You need to convert your customized dataset to an existing dataset format.

#### An example of a customized dataset

Assume we are going to implement a `Filelist` dataset, which takes filelists for both training and testing. The directory of the dataset is as follows:

```
Filelist
├── imgs
│   ├── person1
│   │   ├── 000000.jpg
│   │   ├── 000001.jpg
│   │   ├── ...
│   ├── person2
│   │   ├── 000000.jpg
│   │   ├── 000001.jpg
│   │   ├── ...
├── meta
│   ├── train.txt
│   ├── val.txt
```

The format of the annotation lists is as follows:

```
person1/000000.jpg 0
person1/000001.jpg 0
person2/000000.jpg 1
person2/000001.jpg 1
```

You can directly use [ReIDDataset](https://github.com/open-mmlab/mmtracking/blob/master/mmtrack/datasets/reid_dataset.py). In this case, you only need to modify the config as follows:

```python
# modify the paths of the annotation files and the image path prefix
data = dict(
    train=dict(
        data_prefix='data/Filelist/imgs',
        ann_file='data/Filelist/meta/train.txt'),
    val=dict(
        data_prefix='data/Filelist/imgs',
        ann_file='data/Filelist/meta/val.txt'),
    test=dict(
        data_prefix='data/Filelist/imgs',
        ann_file='data/Filelist/meta/val.txt'),
)
# modify the number of classes, assuming your training set has 100 classes
model = dict(reid=dict(head=dict(num_classes=100)))
```

You can also write a new dataset class inheriting from `BaseDataset` and override `load_annotations(self)`. For more details, you can follow [here](https://github.com/open-mmlab/mmclassification/blob/master/docs/tutorials/new_dataset.md).
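For reference, here is a rough sketch of such a subclass, adapted from the MMClassification tutorial linked above. The class name `MyFilelist` is hypothetical, and the registry import is an assumption that may vary across MMClassification/MMTracking versions, so check it against your installation:

```python
import numpy as np
from mmcls.datasets import BaseDataset
from mmdet.datasets import DATASETS  # assumed: the registry MMTracking datasets register into


@DATASETS.register_module()
class MyFilelist(BaseDataset):
    """Hypothetical dataset reading "<filename> <label>" lines as shown above."""

    def load_annotations(self):
        data_infos = []
        with open(self.ann_file) as f:
            samples = [x.strip().rsplit(' ', 1) for x in f.readlines()]
        for filename, gt_label in samples:
            info = {
                'img_prefix': self.data_prefix,
                'img_info': {'filename': filename},
                'gt_label': np.array(gt_label, dtype=np.int64),
            }
            data_infos.append(info)
        return data_infos
```

Once registered, the class can be referenced from each split of the `data` config via `type='MyFilelist'`.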
### Training

The training stage is the same as for the standard dataset above.

diff --git a/docs_zh-CN/install.md b/docs_zh-CN/install.md
index cbdf9c893..7ac30877e 100644
--- a/docs_zh-CN/install.md
+++ b/docs_zh-CN/install.md
@@ -40,7 +40,7 @@
```shell
conda install pytorch==1.5 cudatoolkit=10.1 torchvision -c pytorch
```
-
+
`Example 2`: if CUDA 9.2 is installed under `/usr/local/cuda` and you want to install PyTorch 1.3.1, you need the prebuilt PyTorch that supports CUDA 9.2:

```shell
conda install pytorch=1.3.1 cudatoolkit=9.2 torchvision=0.4.2 -c pytorch
```

If you build PyTorch from source instead of installing a prebuilt package, you can use more CUDA versions, e.g. 9.0.

@@ -69,7 +69,7 @@
```shell
pip install mmcv-full
```
-
+
4. Install MMDetection:

```shell
pip install mmdet
```

Alternatively, you can clone the MMDetection repository, then develop and run MMDetection locally:

```shell
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -r requirements/build.txt
pip install -v -e . # or "python setup.py develop"
```
-
+
5. Clone the MMTracking repository:

```shell
git clone https://github.com/open-mmlab/mmtracking.git
cd mmtracking
```
-
+
6. Install the build requirements first, then install MMTracking:

```shell