
[Feature] Support UAV123 Dataset in SOT #260

Merged
32 commits merged on Sep 9, 2021
Changes from 23 commits

Commits (32)
324112e
init doc
Aug 23, 2021
bb27f5d
Merge branch 'open-mmlab:master' into master
JingweiZhang12 Aug 25, 2021
82be871
update zh-CN docs
Aug 25, 2021
755792d
Merge branch 'master' of github.com:JingweiZhang12/mmtracking
Aug 26, 2021
240e826
Merge branch 'master' of github.com:open-mmlab/mmtracking
Aug 31, 2021
37bc480
fix some translation error
Aug 31, 2021
fa3d9b1
fix some translation error
Sep 1, 2021
ed74557
add a blank between English and Chinese
Sep 1, 2021
da18a8e
fix some translation error
Sep 2, 2021
b513b85
Merge branch 'open-mmlab:master' into master
JingweiZhang12 Sep 2, 2021
e774c7f
Merge branch 'master' of github.com:open-mmlab/mmtracking
Sep 2, 2021
1f1fcf1
Merge branch 'master' of github.com:JingweiZhang12/mmtracking
Sep 2, 2021
8b4b9ae
fix zh-CN docs
Sep 2, 2021
e823571
fix zh-CN docs
Sep 2, 2021
2f1ee28
support uav
Sep 2, 2021
c1ededf
simplify uav config
Sep 3, 2021
0cfea73
tiny changes in sot_siamrpn_params_search.py
Sep 3, 2021
3bd3014
Merge branch 'master' of github.com:open-mmlab/mmtracking into uav
Sep 3, 2021
1e43fd1
add sot_test_dataset.py
Sep 3, 2021
5242b13
refactor sot test dataset class
Sep 3, 2021
4b63761
update siamrpn README.md
Sep 3, 2021
76dd153
[wip]uav unittest
Sep 7, 2021
37ef6ab
tiny changes in sot unittest
Sep 7, 2021
5600f02
fix docstring
Sep 7, 2021
c029e3c
convert uav to uav123
Sep 8, 2021
42ae7df
add ignore key for evaluation
Sep 8, 2021
8c0f8f1
Merge branch 'uav' of github.com:JingweiZhang12/mmtracking into uav
Sep 8, 2021
6738be3
fix docstring
Sep 8, 2021
76c158f
fix docstring and rename file
Sep 8, 2021
2596abf
Merge branch 'uav' of github.com:JingweiZhang12/mmtracking into uav
Sep 8, 2021
ffe8be4
remove 'ignore' key when converting uav dataset
Sep 9, 2021
98d9c17
fix some typos
Sep 9, 2021
19 changes: 15 additions & 4 deletions configs/sot/siamese_rpn/README.md
@@ -14,12 +14,23 @@
}
```

## Results and models on LaSOT dataset
## Results and models

### LaSOT

We observe fluctuations of around 1.0 point in Success and 1.5 points in Norm precision. We provide the best model.

Note that all checkpoints from the 11th to the 20th epoch need to be evaluated in order to achieve the best results.

| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | Success | Norm precision | Config | Download |
| :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :----: | :------: | :--------: |
| R-50 | - | 20e | 7.54 | 50.0 | 49.9 | 57.9 | [config](siamese_rpn_r50_1x_lasot.py) | [model](https://download.openmmlab.com/mmtracking/sot/siamese_rpn/siamese_rpn_r50_1x_lasot/siamese_rpn_r50_1x_lasot_20201218_051019-3c522eff.pth) | [log](https://download.openmmlab.com/mmtracking/sot/siamese_rpn/siamese_rpn_r50_1x_lasot/siamese_rpn_r50_1x_lasot_20201218_051019.log.json) |
### UAV123

After training the model following [quick_run](https://github.com/open-mmlab/mmtracking/blob/master/docs/quick_run.md#training), you can search for the test-time tracking parameters on UAV123 following [here](https://github.com/open-mmlab/mmtracking/blob/master/docs/useful_tools_scripts.md#siameserpn-test-time-parameter-search) to achieve the best results.

We observe fluctuations of around xxx points in Success and xxx points in Norm precision. We provide the best model.

Note that all checkpoints from the 11th to the 20th epoch need to be evaluated in order to achieve the best results.

| Dataset | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | Success | Norm precision | Config | Download |
| :-------------: | :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :----: | :------: | :--------: |
| LaSOT | R-50 | - | 20e | 7.54 | 50.0 | 49.9 | 57.9 | [config](siamese_rpn_r50_1x_lasot.py) | [model](https://download.openmmlab.com/mmtracking/sot/siamese_rpn/siamese_rpn_r50_1x_lasot/siamese_rpn_r50_1x_lasot_20201218_051019-3c522eff.pth) | [log](https://download.openmmlab.com/mmtracking/sot/siamese_rpn/siamese_rpn_r50_1x_lasot/siamese_rpn_r50_1x_lasot_20201218_051019.log.json) |
| UAV123 | R-50 | - | 20e | - | - | 61.8 | 77.3 | [config](siamese_rpn_r50_1x_uav.py) | [model](https://download.openmmlab.com/mmtracking/sot/siamese_rpn/siamese_rpn_r50_1x_lasot/siamese_rpn_r50_1x_lasot_20201218_051019-3c522eff.pth) | [log](https://download.openmmlab.com/mmtracking/sot/siamese_rpn/siamese_rpn_r50_1x_lasot/siamese_rpn_r50_1x_lasot_20201218_051019.log.json) |
17 changes: 17 additions & 0 deletions configs/sot/siamese_rpn/siamese_rpn_r50_1x_uav.py
@@ -0,0 +1,17 @@
_base_ = ['./siamese_rpn_r50_1x_lasot.py']

# model settings
model = dict(
test_cfg=dict(rpn=dict(penalty_k=0.01, window_influence=0.02, lr=0.46)))

data_root = 'data/'
# dataset settings
data = dict(
val=dict(
type='UAV123Dataset',
ann_file=data_root + 'uav123/annotations/uav123.json',
img_prefix=data_root + 'uav123/data_seq/UAV123'),
test=dict(
type='UAV123Dataset',
ann_file=data_root + 'uav123/annotations/uav123.json',
img_prefix=data_root + 'uav123/data_seq/UAV123'))
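
A minimal sketch of inspecting this config, assuming `mmcv` is installed and the repository is checked out so the path below exists; `mmcv` merges the `_base_` LaSOT config automatically:

```python
# Sketch only: load the new UAV123 config and check the overridden fields.
from mmcv import Config

cfg = Config.fromfile('configs/sot/siamese_rpn/siamese_rpn_r50_1x_uav.py')
print(cfg.model.test_cfg.rpn)  # expected: penalty_k=0.01, window_influence=0.02, lr=0.46
print(cfg.data.test.type)      # expected: 'UAV123Dataset'
```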
29 changes: 28 additions & 1 deletion docs/dataset.md
@@ -8,6 +8,7 @@ This page provides the instructions for dataset preparation on existing benchmarks
- [MOT Challenge](https://motchallenge.net/)
- Single Object Tracking
- [LaSOT](http://vision.cs.stonybrook.edu/~lasot/)
- [UAV123](https://cemse.kaust.edu.sa/ivul/uav123/)

### 1. Download Datasets

@@ -21,7 +22,7 @@ Notes:

- For the training and testing of the multi object tracking task, only one of the MOT Challenge datasets (e.g. MOT17) is needed.

- For the training and testing of single object tracking task, the MSCOCO, ILSVRC and LaSOT datasets are needed.
- For the training and testing of the single object tracking task, the MSCOCO, ILSVRC, LaSOT and UAV123 datasets are needed.

```
mmtracking
@@ -62,6 +63,14 @@ mmtracking
| ├── MOT15/MOT16/MOT17/MOT20
| | ├── train
| | ├── test
│ │
│ ├── uav123
│ │ ├── data_seq
│ │ │ ├── UAV123
│ │ │ │ ├── bike1
│ │ │ │ ├── boat1
│ │ ├── anno
│ │ │ ├── UAV123
```

### 2. Convert Annotations
@@ -83,6 +92,9 @@ python ./tools/convert_datasets/lasot2coco.py -i ./data/lasot/LaSOTTesting -o ./
# The processing of other MOT Challenge dataset is the same as MOT17
python ./tools/convert_datasets/mot2coco.py -i ./data/MOT17/ -o ./data/MOT17/annotations --split-train --convert-det
python ./tools/convert_datasets/mot2reid.py -i ./data/MOT17/ -o ./data/MOT17/reid --val-split 0.2 --vis-threshold 0.3

# UAV123
python ./tools/convert_datasets/uav2coco.py -i ./data/uav123/ -o ./data/uav123/annotations
```

The folder structure will be as follows after you run these scripts:
@@ -132,6 +144,15 @@ mmtracking
| | ├── reid
│ │ │ ├── imgs
│ │ │ ├── meta
│ │
│ ├── uav123
│ │ ├── data_seq
│ │ │ ├── UAV123
│ │ │ │ ├── bike1
│ │ │ │ ├── boat1
│ │ ├── anno (the official annotation files)
│ │ │ ├── UAV123
│ │ ├── annotations (the converted annotation file)
```

#### The folder of annotations in ILSVRC
@@ -200,3 +221,9 @@ MOT17-02-FRCNN_000009/000081.jpg 3
For validation, the annotation list `val_20.txt` remains in the same format as above.

Images in `reid/imgs` are cropped from raw images in `MOT17/train` by the corresponding `gt.txt`. The value of ground-truth labels should fall in range `[0, num_classes - 1]`.

#### The folder of annotations in UAV123

There is only one JSON file in `data/uav123/annotations`:

`uav123.json`: JSON file containing the annotation information of the UAV123 dataset.
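
A quick way to sanity-check the converted file; this is a sketch only, and it assumes the conversion command above has been run and that the file follows the CocoVID-style layout used by the other SOT annotation files in mmtracking:

```python
# Sketch: assumes data/uav123/annotations/uav123.json was produced by
# tools/convert_datasets/uav2coco.py and has CocoVID-style top-level keys
# ('videos', 'images', 'annotations', 'categories').
import json

with open('data/uav123/annotations/uav123.json') as f:
    anns = json.load(f)

print(sorted(anns.keys()))
print('num sequences:', len(anns.get('videos', [])))
print('num frames:', len(anns.get('images', [])))
```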
2 changes: 1 addition & 1 deletion docs/quick_run.md
@@ -416,7 +416,7 @@ We provide instructions for customizing models of different tasks.
### 3. Prepare a config

The next step is to prepare a config so that the dataset or the model can be successfully loaded.
More details about the config system are provided at [tutorials/config.md](https://mmtracking.readthedocs.io/en/latest/tutorials/config.html).
More details about the config system are provided at [tutorials/config.md](https://mmtracking.readthedocs.io/zh_CN/latest/tutorials/config.html).

### 4. Train a new model

11 changes: 11 additions & 0 deletions docs/useful_tools_scripts.md
@@ -111,6 +111,17 @@ python tools/publish_model.py work_dirs/dff_faster_rcnn_r101_dc5_1x_imagenetvid/

The final output filename will be `dff_faster_rcnn_r101_dc5_1x_imagenetvid_20201230-{hash id}.pth`.

### SiameseRPN Test-time Parameter Search

`tools/sot_siamrpn_param_search.py` can search the test-time tracking parameters of SiameseRPN: `penalty_k`, `lr` and `window_influence`. You need to pass the range of each parameter through the corresponding command-line argument.

Example:

```shell
python tools/sot_siamrpn_param_search.py [${CONFIG}] [--checkpoint ${CHECKPOINT}] [--penalty-k-range 0.05,0.5,0.05]
[--lr-range 0.3,0.45,0.02] [--win-infu-range 0.46,0.55,0.02] [--log ${LOG}] [--eval ${EVAL}]
```
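
Each range is given as `start,stop,step`. As a rough illustration of the resulting search space (an assumption about how the ranges are expanded; the actual loop lives inside the script and may differ in detail):

```python
# Illustrative only: expands the comma-separated start,stop,step ranges the
# way numpy.arange would, then enumerates all combinations.
from itertools import product

import numpy as np

penalty_k_range = np.arange(0.05, 0.5, 0.05)
lr_range = np.arange(0.3, 0.45, 0.02)
win_influence_range = np.arange(0.46, 0.55, 0.02)

for penalty_k, lr, win_influence in product(penalty_k_range, lr_range,
                                            win_influence_range):
    # each combination would be written into model.test_cfg.rpn and evaluated
    print(dict(penalty_k=penalty_k, lr=lr, window_influence=win_influence))
```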

## Miscellaneous

### Print the entire config
5 changes: 4 additions & 1 deletion mmtrack/datasets/__init__.py
@@ -9,10 +9,13 @@
from .parsers import CocoVID
from .pipelines import PIPELINES
from .reid_dataset import ReIDDataset
from .sot_test_dataset import SOTTestDataset
from .sot_train_dataset import SOTTrainDataset
from .uav_dataset import UAV123Dataset

__all__ = [
'DATASETS', 'PIPELINES', 'build_dataloader', 'build_dataset', 'CocoVID',
'CocoVideoDataset', 'ImagenetVIDDataset', 'MOTChallengeDataset',
'LaSOTDataset', 'SOTTrainDataset', 'ReIDDataset'
'ReIDDataset', 'SOTTrainDataset', 'SOTTestDataset', 'LaSOTDataset',
'UAV123Dataset'
]
68 changes: 2 additions & 66 deletions mmtrack/datasets/lasot_dataset.py
@@ -1,24 +1,17 @@
# Copyright (c) OpenMMLab. All rights reserved.
import numpy as np
from mmcv.utils import print_log
from mmdet.datasets import DATASETS

from mmtrack.core.evaluation import eval_sot_ope
from .coco_video_dataset import CocoVideoDataset
from .sot_test_dataset import SOTTestDataset


@DATASETS.register_module()
class LaSOTDataset(CocoVideoDataset):
class LaSOTDataset(SOTTestDataset):
"""LaSOT dataset for the testing of single object tracking.

The dataset doesn't support training mode.
"""

CLASSES = (0, )

def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)

def _parse_ann_info(self, img_info, ann_info):
"""Parse bbox annotations.

@@ -39,60 +32,3 @@ def _parse_ann_info(self, img_info, ann_info):
ignore = ann_info[0]['full_occlusion'] or ann_info[0]['out_of_view']
ann = dict(bboxes=gt_bboxes, labels=gt_labels, ignore=ignore)
return ann

def evaluate(self, results, metric=['track'], logger=None):
"""Evaluation in OPE protocol.

Args:
results (dict): Testing results of the dataset.
metric (str | list[str]): Metrics to be evaluated. Options are
'track'.
logger (logging.Logger | str | None): Logger used for printing
related information during evaluation. Default: None.

Returns:
dict[str, float]: OPE style evaluation metric (i.e. success,
norm precision and precision).
"""
if isinstance(metric, list):
metrics = metric
elif isinstance(metric, str):
metrics = [metric]
else:
raise TypeError('metric must be a list or a str.')
allowed_metrics = ['track']
for metric in metrics:
if metric not in allowed_metrics:
raise KeyError(f'metric {metric} is not supported.')

eval_results = dict()
if 'track' in metrics:
assert len(self.data_infos) == len(results['track_results'])
print_log('Evaluate OPE Benchmark...', logger=logger)
inds = [
i for i, _ in enumerate(self.data_infos) if _['frame_id'] == 0
]
num_vids = len(inds)
inds.append(len(self.data_infos))

track_bboxes = [
list(
map(lambda x: x[:4],
results['track_results'][inds[i]:inds[i + 1]]))
for i in range(num_vids)
]

ann_infos = [self.get_ann_info(_) for _ in self.data_infos]
ann_infos = [
ann_infos[inds[i]:inds[i + 1]] for i in range(num_vids)
]
track_eval_results = eval_sot_ope(
results=track_bboxes, annotations=ann_infos)
eval_results.update(track_eval_results)

for k, v in eval_results.items():
if isinstance(v, float):
eval_results[k] = float(f'{(v):.3f}')
print_log(eval_results, logger=logger)

return eval_results
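
After this refactor, `LaSOTDataset` keeps only its annotation parsing and inherits `evaluate()` from the new base class. A small sanity check, as a sketch that assumes `mmtrack` is importable:

```python
# Sketch: verifies that evaluate() now comes from the shared base class.
from mmtrack.datasets import LaSOTDataset, SOTTestDataset

assert issubclass(LaSOTDataset, SOTTestDataset)
assert 'evaluate' not in LaSOTDataset.__dict__            # no longer overridden here
assert LaSOTDataset.evaluate is SOTTestDataset.evaluate   # inherited implementation
```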
97 changes: 97 additions & 0 deletions mmtrack/datasets/sot_test_dataset.py
@@ -0,0 +1,97 @@
# Copyright (c) OpenMMLab. All rights reserved.
import numpy as np
from mmcv.utils import print_log
from mmdet.datasets import DATASETS

from mmtrack.core.evaluation import eval_sot_ope
from .coco_video_dataset import CocoVideoDataset


@DATASETS.register_module()
class SOTTestDataset(CocoVideoDataset):
"""Dataset for the testing of single object tracking.

The dataset doesn't support training mode.
"""

CLASSES = (0, )

def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)

def _parse_ann_info(self, img_info, ann_info):
"""Parse bbox annotations.

Args:
img_info (dict): image information.
ann_info (list[dict]): Annotation information of an image. Each
image only has one bbox annotation.

Returns:
dict: A dict containing the following keys: bboxes, labels.
labels are not useful in SOT.
"""
gt_bboxes = np.array(ann_info[0]['bbox'], dtype=np.float32)
# convert [x1, y1, w, h] to [x1, y1, x2, y2]
gt_bboxes[2] += gt_bboxes[0]
gt_bboxes[3] += gt_bboxes[1]
gt_labels = np.array(self.cat2label[ann_info[0]['category_id']])
ann = dict(bboxes=gt_bboxes, labels=gt_labels)
return ann

def evaluate(self, results, metric=['track'], logger=None):
"""Evaluation in OPE protocol.

Args:
results (dict): Testing results of the dataset.
metric (str | list[str]): Metrics to be evaluated. Options are
'track'.
logger (logging.Logger | str | None): Logger used for printing
related information during evaluation. Default: None.

Returns:
dict[str, float]: OPE style evaluation metric (i.e. success,
norm precision and precision).
"""
if isinstance(metric, list):
metrics = metric
elif isinstance(metric, str):
metrics = [metric]
else:
raise TypeError('metric must be a list or a str.')
allowed_metrics = ['track']
for metric in metrics:
if metric not in allowed_metrics:
raise KeyError(f'metric {metric} is not supported.')

eval_results = dict()
if 'track' in metrics:
assert len(self.data_infos) == len(results['track_results'])
print_log('Evaluate OPE Benchmark...', logger=logger)
inds = [
i for i, _ in enumerate(self.data_infos) if _['frame_id'] == 0
]
num_vids = len(inds)
inds.append(len(self.data_infos))

track_bboxes = [
list(
map(lambda x: x[:4],
results['track_results'][inds[i]:inds[i + 1]]))
for i in range(num_vids)
]

ann_infos = [self.get_ann_info(_) for _ in self.data_infos]
ann_infos = [
ann_infos[inds[i]:inds[i + 1]] for i in range(num_vids)
]
track_eval_results = eval_sot_ope(
results=track_bboxes, annotations=ann_infos)
eval_results.update(track_eval_results)

for k, v in eval_results.items():
if isinstance(v, float):
eval_results[k] = float(f'{(v):.3f}')
print_log(eval_results, logger=logger)

return eval_results
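
As a toy illustration of the `[x1, y1, w, h]` to `[x1, y1, x2, y2]` conversion performed in `_parse_ann_info` above (values are made up):

```python
# Toy example: width and height are added to the top-left corner to obtain
# the bottom-right corner, mirroring the in-place update in _parse_ann_info.
import numpy as np

bbox = np.array([10., 20., 30., 40.], dtype=np.float32)  # [x1, y1, w, h]
bbox[2] += bbox[0]
bbox[3] += bbox[1]
print(bbox)  # [10. 20. 40. 60.] -> [x1, y1, x2, y2]
```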
13 changes: 13 additions & 0 deletions mmtrack/datasets/uav_dataset.py
@@ -0,0 +1,13 @@
# Copyright (c) OpenMMLab. All rights reserved.
from mmdet.datasets import DATASETS

from .sot_test_dataset import SOTTestDataset


@DATASETS.register_module()
class UAV123Dataset(SOTTestDataset):
"""UAV123 dataset for the testing of single object tracking.

The dataset doesn't support training mode.
"""
pass
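
Since the class is registered with `DATASETS`, it can be built from a config dict. A hedged sketch follows; the paths are the ones used in the new config and must exist locally, and the empty test pipeline is purely for illustration:

```python
# Sketch: build the registered dataset through the registry.
from mmtrack.datasets import build_dataset

uav_test = build_dataset(
    dict(
        type='UAV123Dataset',
        ann_file='data/uav123/annotations/uav123.json',
        img_prefix='data/uav123/data_seq/UAV123',
        pipeline=[],
        test_mode=True))
print(len(uav_test))  # number of annotated frames loaded from the JSON file
```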
@@ -12,7 +12,7 @@


@pytest.mark.parametrize('dataset', ['LaSOTDataset'])
def test_lasot_dataset_parse_ann_info(dataset):
def test_sot_dataset_parse_ann_info(dataset):
dataset_class = DATASETS.get(dataset)

dataset = dataset_class(
@@ -29,7 +29,7 @@ def test_lasot_dataset_parse_ann_info(dataset):
assert ann['labels'] == 0


def test_lasot_evaluation():
def test_sot_ope_evaluation():
dataset_class = DATASETS.get('LaSOTDataset')
dataset = dataset_class(
ann_file=osp.join(LASOT_ANN_PATH, 'lasot_test_dummy.json'),
4 changes: 3 additions & 1 deletion tools/convert_datasets/imagenet2coco_det.py
@@ -1,6 +1,7 @@
# Copyright (c) OpenMMLab. All rights reserved.
import argparse
import glob
import os
import os.path as osp
import xml.etree.ElementTree as ET
from collections import defaultdict
@@ -167,7 +168,8 @@ def convert_det(DET, ann_dir, save_dir):
is_vid_train_frame,
records, DET,
obj_num_classes)

if not osp.isdir(save_dir):
os.makedirs(save_dir)
mmcv.dump(DET, osp.join(save_dir, 'imagenet_det_30plus1cls.json'))
print('-----ImageNet DET------')
print(f'total {records["img_id"] - 1} images')
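The added `os.makedirs` guard ensures the output directory exists before dumping the JSON file. An equivalent, slightly more concise pattern (just a suggestion, not what the patch uses; the path below is a hypothetical example) would be:

```python
# Alternative to the isdir check: exist_ok avoids raising if the directory
# already exists and removes the check-then-create race.
import os

save_dir = './data/ILSVRC/annotations'  # hypothetical output directory
os.makedirs(save_dir, exist_ok=True)
```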