
Introduction

Deep-person-reid is a PyTorch-based framework for training and evaluating deep person re-identification models on reid benchmarks.

It has the following features:

  • multi-GPU training.
  • both image reid and video reid.
  • standard dataset splits used by most research papers.
  • incredibly easy preparation of reid datasets.
  • implementations of state-of-the-art reid models.
  • end-to-end training and evaluation.
  • multi-dataset training.
  • visualization of ranked results.
  • state-of-the-art training techniques.

Updates

  • 11-11-2018 (New): Added multi-dataset training; added cython code for cuhk03-style evaluation; wrapped dataloader construction into Image/Video-DataManager; wrapped argparse into args.py; added MLFN (CVPR'18).

Installation

  1. Run git clone https://github.com/KaiyangZhou/deep-person-reid.
  2. Install dependencies with pip install -r requirements.txt (if necessary).
  3. To install the cython-based evaluation toolbox, cd to torchreid/eval_cylib and run make. This generates eval_metrics_cy.so under the same folder. Run python test_cython.py to verify that the toolbox works. (credit to luzai) The full sequence is shown below.
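
Putting these steps together, a typical installation session looks like this:

git clone https://github.com/KaiyangZhou/deep-person-reid
cd deep-person-reid
pip install -r requirements.txt
# build the cython-based evaluation toolbox (generates eval_metrics_cy.so)
cd torchreid/eval_cylib
make
# verify the toolbox is installed successfully
python test_cython.py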

Datasets

Image-reid datasets:

Video-reid datasets:

The keys used to specify these datasets are defined in torchreid/datasets/__init__.py. The data managers for image reid and video reid are implemented in torchreid/data_manager.py.

Instructions regarding how to prepare (and do evaluation on) these datasets can be found in DATASETS.md.

Models

ImageNet classification models

Lightweight models

ReID-specific models

Please refer to torchreid/models/__init__.py for the keys to build these models. In the MODEL_ZOO, we provide pretrained model weights and the training scripts to reproduce the results.

Losses

  • xent: cross entropy loss, optionally with the label smoothing regularizer.
  • htri: hard mining triplet loss.

Tutorial

Train

Training methods are implemented in

  • train_imgreid_xent.py: train image-reid models with cross entropy loss.
  • train_imgreid_xent_htri.py: train image-reid models with hard mining triplet loss, or the combination of hard mining triplet loss and cross entropy loss.
  • train_vidreid_xent.py: train video-reid models with cross entropy loss.
  • train_vidreid_xent_htri.py: train video-reid models with hard mining triplet loss, or the combination of hard mining triplet loss and cross entropy loss.

Input arguments for the above training scripts are unified in args.py.

To train an image-reid model with cross entropy loss, you can run the command below. Note that the inline # comments are for explanation only and must be removed before running the command.

python train_imgreid_xent.py \
-s market1501 \ # source dataset for training
-t market1501 \ # target dataset for test
--height 256 \ # image height
--width 128 \ # image width
--optim amsgrad \ # optimizer
--label-smooth \ # label smoothing regularizer
--lr 0.0003 \ # learning rate
--max-epoch 60 \ # maximum epoch to run
--stepsize 20 40 \ # stepsize for learning rate decay
--train-batch-size 32 \
--test-batch-size 100 \
-a resnet50 \ # network architecture
--save-dir log/resnet50-market-xent \ # where to save the log and models
--gpu-devices 0 # gpu device index

Multi-dataset training

-s and -t can each take one or more dataset keys (delimited by space). For example, if you want to train models on Market1501 + DukeMTMC-reID and test on both of them, you can use -s market1501 dukemtmcreid and -t market1501 dukemtmcreid. If you want to test on a different dataset, e.g. MSMT17, just use -t msmt17. Multi-dataset training is implemented for both image-reid and video-reid. Note that when -t takes multiple datasets, evaluation is performed on each dataset individually. A concrete command is sketched below.
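
For instance, the following sketch (reusing flags from the earlier example) trains on Market1501 + DukeMTMC-reID and tests on both plus MSMT17:

python train_imgreid_xent.py \
-s market1501 dukemtmcreid \
-t market1501 dukemtmcreid msmt17 \
--height 256 \
--width 128 \
-a resnet50 \
--save-dir log/resnet50-multi-xent \
--gpu-devices 0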

Two-stepped transfer learning

To fine-tune models pretrained on external large-scale datasets, such as ImageNet, the two-stepped training strategy is useful.

First, the base network is frozen and only the randomly initialized layers (e.g. identity classification layer) are trained for --fixbase-epoch epochs. Specifically, the layers specified by --open-layers are set to the train mode and will be updated, while other layers are set to the eval mode and are frozen. See open_specified_layers(model, open_layers) in torchreid/utils/torchtools.py.

Second, after the new layers have adapted to the old layers, all layers are set to the train mode and trained for --max-epoch epochs in total. See open_all_layers(model) in torchreid/utils/torchtools.py.

For example, to train resnet50 with a randomly initialized classifier, you can set --fixbase-epoch 5 and --open-layers classifier. The layer names must align with the attribute names in the model, i.e. self.classifier must exist in the model. A sketch of the full command is given below.
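
A minimal sketch of the two-stepped schedule, adding the two flags to the earlier training command:

# first 5 epochs: base network frozen, only self.classifier is trained
python train_imgreid_xent.py \
-s market1501 \
-t market1501 \
--height 256 \
--width 128 \
--max-epoch 60 \
-a resnet50 \
--fixbase-epoch 5 \
--open-layers classifier \
--save-dir log/resnet50-market-xent-fixbase \
--gpu-devices 0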

Using hard mining triplet loss

Training with the hard mining triplet loss (htri) requires adding --train-sampler RandomIdentitySampler, as sketched below.
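
For example, to train an image-reid model with the combination of cross entropy loss and hard mining triplet loss:

python train_imgreid_xent_htri.py \
-s market1501 \
-t market1501 \
--height 256 \
--width 128 \
-a resnet50 \
--train-sampler RandomIdentitySampler \
--save-dir log/resnet50-market-xent-htri \
--gpu-devices 0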

Training video-reid models

For video reid, --test-batch-size refers to the number of tracklets, so the real image batch size is --test-batch-size * --seq-len.
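
A sketch of a video-reid run; here mars is assumed to be a valid video dataset key (check torchreid/datasets/__init__.py), and the --seq-len value is illustrative:

python train_vidreid_xent.py \
-s mars \
-t mars \
--height 256 \
--width 128 \
--seq-len 15 \
--test-batch-size 2 \
-a resnet50 \
--save-dir log/resnet50-mars-xent \
--gpu-devices 0
# real test image batch size: 2 tracklets * 15 frames = 30 images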

Test

Evaluation mode

Use --evaluate to switch to evaluation mode. In doing so, no model training is performed. For example, say you want to load model weights at path_to/resnet50.pth.tar for resnet50 and evaluate it on Market1501, you can run the command below (again, remove the inline # comments before running)

python train_imgreid_xent.py \
-s market1501 \ # this does not matter any more
-t market1501 \ # you can add more datasets here for the test list
--height 256 \
--width 128 \
--test-batch-size 100 \
--evaluate \
-a resnet50 \
--load-weights path_to/resnet50.pth.tar \
--save-dir log/eval-resnet50 \
--gpu-devices 0

Note that --load-weights will discard layer weights in path_to/resnet50.pth.tar whose sizes do not match those of the corresponding layers in the model.

Evaluation frequency

Use --eval-freq to control the evaluation frequency and --start-eval to indicate from which epoch to start the periodic evaluation, as sketched below.
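
For example, to evaluate every 10 epochs starting from epoch 20 (both values are illustrative), extend the earlier training command:

python train_imgreid_xent.py \
-s market1501 \
-t market1501 \
--height 256 \
--width 128 \
-a resnet50 \
--eval-freq 10 \
--start-eval 20 \
--save-dir log/resnet50-market-xent \
--gpu-devices 0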

Visualize ranked results

Ranked results can be visualized via --visualize-ranks, which works together with --evaluate. Ranked images will be saved in save_dir/ranked_results, where save_dir is the directory you specify with --save-dir. This function is implemented in torchreid/utils/reidtools.py. An example is sketched below.
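
For instance, extending the evaluation command above:

python train_imgreid_xent.py \
-s market1501 \
-t market1501 \
--height 256 \
--width 128 \
--evaluate \
--visualize-ranks \
-a resnet50 \
--load-weights path_to/resnet50.pth.tar \
--save-dir log/eval-resnet50 \
--gpu-devices 0
# ranked images are saved in log/eval-resnet50/ranked_results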

Misc

Citation

Please link this project in your paper.

License

This project is under the MIT License.