This repository contains training and inference code for vehicle re-identification neural networks. The networks are based on the OSNet architecture provided by the deep-object-reid project. The code supports conversion to the ONNX* format.
Model Name | VeRi-776* rank-1 | VeRi-776* mAP | GFlops | MParams | Links |
---|---|---|---|---|---|
vehicle-reid-0001 | 96.31 | 85.15 | 2.643 | 2.183 | snapshot, configuration file |
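The rank-1 and mAP figures above are standard re-identification metrics computed over query and gallery embeddings. As an illustrative sketch only (toy data and cosine similarity, not the project's actual evaluation code), rank-1 accuracy can be computed like this:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank1_accuracy(queries, gallery):
    """Fraction of queries whose most similar gallery embedding shares their vehicle ID."""
    hits = 0
    for q_id, q_emb in queries:
        best_id = max(gallery, key=lambda g: cosine_sim(q_emb, g[1]))[0]
        hits += int(best_id == q_id)
    return hits / len(queries)

# Toy embeddings: (vehicle_id, feature_vector)
gallery = [(0, [1.0, 0.1]), (1, [0.1, 1.0])]
queries = [(0, [0.9, 0.2]), (1, [0.2, 0.8])]
print(rank1_accuracy(queries, gallery))  # 1.0 on this toy data
```

mAP additionally averages precision over all correct gallery matches per query, rewarding models that rank every true match high, not just the first one.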
- Ubuntu* 16.04
- Python* 3.5.2
- PyTorch* 1.3 or higher
- OpenVINO™ 2019 R4 (or newer) with Python API
To create and activate a virtual Python environment, follow the installation instructions.
This toolkit contains configs for training on the following datasets:
- VeRi-776
- VeRi-Wild
- UniverseModels (set of make/model classification datasets with merged annotation)
Note: Instructions on how to prepare the training datasets can be found in DATA.md.
The final structure of the root directory is as follows:
root
├── veri
│ ├── image_train
│ ├── image_query
│ ├── image_test
│ └── train_label.xml
│
├── veri-wild
│ ├── images
│ └── vehicle_info.txt
│
└── universe_models
└── images
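A small helper can verify that the datasets were unpacked into the expected layout before training. This is an illustrative sketch, not part of the toolkit:

```python
import os

# Expected sub-paths, mirroring the directory tree above
# (VeRi-776, VeRi-Wild, UniverseModels).
EXPECTED = [
    "veri/image_train",
    "veri/image_query",
    "veri/image_test",
    "veri/train_label.xml",
    "veri-wild/images",
    "veri-wild/vehicle_info.txt",
    "universe_models/images",
]

def missing_entries(root):
    """Return the expected dataset entries that are absent under root."""
    return [p for p in EXPECTED if not os.path.exists(os.path.join(root, p))]
```

Running `missing_entries("/path/to/datasets/directory/root")` returns an empty list when the layout matches the tree above.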
The training and inference scripts use the configuration file default_config.py, which contains the default parameter values together with their descriptions. Parameters that you wish to change must be specified in your own configuration file. Example: vehicle-reid-0001.yaml
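Conceptually, the user configuration file is overlaid onto the defaults, so only the parameters you list are changed. The sketch below illustrates this overlay idea with a simple recursive merge; the toolkit's real merging logic and parameter names may differ:

```python
# Illustrative defaults; the real default_config.py defines many more parameters.
DEFAULTS = {
    "model": {"name": "osnet", "load_weights": ""},
    "test": {"evaluate": False},
}

def merge(base, override):
    """Recursively overlay override values onto a copy of base."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

# A user config only needs the parameters that differ from the defaults.
user_cfg = {"model": {"load_weights": "/path/to/weights.pth"}}
cfg = merge(DEFAULTS, user_cfg)
print(cfg["model"]["load_weights"])  # /path/to/weights.pth
print(cfg["test"]["evaluate"])      # False (untouched default)
```

The same overlay applies to command-line overrides such as `model.load_weights` and `test.evaluate` in the examples below: a dotted key addresses a nested parameter.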
To start training, create or choose a configuration file and use the main.py script.
Example:
python ../../../external/deep-object-reid/tools/main.py \
--root /path/to/datasets/directory/root \
--config configs/vehicle-reid-0001.yaml
To start fine-tuning, create or choose a configuration file, choose the initial model weights (you can use the pre-trained snapshot linked in the table above), and use the main.py script.
Example:
python ../../../external/deep-object-reid/tools/main.py \
    --root /path/to/datasets/directory/root \
    --config configs/vehicle-reid-0001.yaml \
    model.load_weights /path/to/pretrained/model/weights
To test your network, specify your configuration file and use the main.py script.
Example:
python ../../../external/deep-object-reid/tools/main.py \
    --root /path/to/datasets/directory/root \
    --config configs/vehicle-reid-0001.yaml \
    model.load_weights /path/to/trained/model/weights \
    test.evaluate True
Follow the steps below:
1. Convert a PyTorch model to the ONNX format by running the following:

   python ../../../external/deep-object-reid/tools/convert_to_onnx.py \
       --config /path/to/config/file.yaml \
       --output-name /path/to/output/model \
       model.load_weights /path/to/trained/model/weights

   The .onnx extension is appended to the output model name automatically. By default, the output model path is model.onnx.

2. Convert the obtained ONNX model to the IR format by running the following command:

   python <OpenVINO_INSTALL_DIR>/deployment_tools/model_optimizer/mo.py \
       --input_model model.onnx \
       --input_shape [1,3,208,208] \
       --reverse_input_channels

   This produces the model.xml model description and the model.bin weights file in single-precision floating-point format (FP32).
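The converted model expects a 1x3x208x208 input, i.e. a single image in N,C,H,W layout. Real pipelines do this with OpenCV/NumPy; the pure-Python sketch below (illustrative only) shows the H,W,C to N,C,H,W rearrangement on a tiny toy image:

```python
def hwc_to_nchw(image):
    """Convert an H x W x C nested-list image to a 1 x C x H x W batch."""
    height = len(image)
    width = len(image[0])
    channels = len(image[0][0])
    chw = [[[image[y][x][c] for x in range(width)] for y in range(height)]
           for c in range(channels)]
    return [chw]  # add a batch dimension of 1

# Tiny 2x2 3-channel "image" standing in for a 208x208 frame
img = [[[1, 2, 3], [4, 5, 6]],
       [[7, 8, 9], [10, 11, 12]]]
batch = hwc_to_nchw(img)
print(batch[0][0])  # channel 0 plane: [[1, 4], [7, 10]]
```

Because the model was converted with --reverse_input_channels, the BGR-to-RGB swap is baked into the IR, so BGR frames (for example, from OpenCV) can be fed as-is.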
OpenVINO™ provides the multi-camera-multi-target tracking demo, which can use these models as vehicle re-identification networks. See details in the demo.
Original repository: github.com/sovrasov/deep-person-reid/tree/vehicle_reid