[Feature] Add SOD datasets #913

Closed
wants to merge 10 commits
56 changes: 56 additions & 0 deletions configs/_base_/datasets/duts.py
@@ -0,0 +1,56 @@
# dataset settings
dataset_type = 'DUTSDataset'
data_root = 'data/DUTS'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
img_scale = (352, 352)
crop_size = (320, 320)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)),
dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
dict(type='RandomFlip', prob=0.5),
dict(type='PhotoMetricDistortion'),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_semantic_seg'])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=img_scale,
# img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0],
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(type='Normalize', **img_norm_cfg),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]
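
The `Normalize` step above applies channel-wise standardization with the `img_norm_cfg` values (ImageNet statistics). A minimal sketch of the arithmetic in plain Python, not mmseg's implementation:

```python
# ImageNet statistics from img_norm_cfg above
mean = [123.675, 116.28, 103.53]
std = [58.395, 57.12, 57.375]

def normalize_pixel(rgb, mean, std):
    # channel-wise (value - mean) / std
    return [(v - m) / s for v, m, s in zip(rgb, mean, std)]

# A pixel equal to the mean normalizes to zero in every channel
normalize_pixel([123.675, 116.28, 103.53], mean, std)  # -> [0.0, 0.0, 0.0]
```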

data = dict(
samples_per_gpu=4,
workers_per_gpu=4,
train=dict(
type=dataset_type,
data_root=data_root,
img_dir='images/training',
ann_dir='annotations/training',
pipeline=train_pipeline),
val=dict(
Collaborator: Can we use a concat dataset (#833) and do evaluation separately?

type=dataset_type,
data_root=data_root,
img_dir='images/validation',
ann_dir='annotations/validation',
pipeline=test_pipeline),
test=dict(
type=dataset_type,
data_root=data_root,
img_dir='images/validation',
ann_dir='annotations/validation',
pipeline=test_pipeline))
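
`dataset_type = 'DUTSDataset'` is resolved by name through mmseg's dataset registry; the real class is defined elsewhere in this PR. A toy illustration of the registry pattern, with hypothetical names rather than mmseg's actual code:

```python
# Toy registry illustrating how dataset_type strings resolve to classes.
# Hypothetical sketch; mmseg's real DATASETS registry works differently.
DATASETS = {}

def register_module(cls):
    DATASETS[cls.__name__] = cls
    return cls

@register_module
class DUTSDataset:
    CLASSES = ('background', 'foreground')  # assumed binary SOD labels

    def __init__(self, data_root, img_dir, ann_dir, pipeline):
        self.data_root = data_root
        self.img_dir = img_dir
        self.ann_dir = ann_dir
        self.pipeline = pipeline

def build_dataset(cfg):
    cfg = dict(cfg)  # copy so pop() does not mutate the config dict
    return DATASETS[cfg.pop('type')](**cfg)

ds = build_dataset(dict(type='DUTSDataset', data_root='data/DUTS',
                        img_dir='images/training',
                        ann_dir='annotations/training', pipeline=[]))
```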
68 changes: 68 additions & 0 deletions docs/dataset_prepare.md
@@ -108,6 +108,28 @@ mmsegmentation
| | └── leftImg8bit
| | | └── test
| | | └── night
│   ├── DUTS
│   │   ├── images
│   │   │   ├── training
│   │   │   ├── validation
│   │   ├── annotations
│   │   │   ├── training
│   │   │   ├── validation
│   ├── DUT-OMRON
│   │   ├── images
│   │   │   ├── validation
│   │   ├── annotations
│   │   │   ├── validation
│   ├── ECSSD
│   │   ├── images
│   │   │   ├── validation
│   │   ├── annotations
│   │   │   ├── validation
│   ├── HKU-IS
│   │   ├── images
│   │   │   ├── validation
│   │   ├── annotations
│   │   │   ├── validation
```

### Cityscapes
@@ -253,3 +275,49 @@ Since we only support test models on this dataset, you may only download [the va
### Nighttime Driving

Since we only support testing models on this dataset, you may only download [the test set](http://data.vision.ee.ethz.ch/daid/NighttimeDriving/NighttimeDrivingTest.zip).

### DUTS

First, download [DUTS-TR.zip](http://saliencydetection.net/duts/download/DUTS-TR.zip) and [DUTS-TE.zip](http://saliencydetection.net/duts/download/DUTS-TE.zip).

Reviewer: Change the three occurrences of `) .` in this file to `).`


To convert the DUTS dataset to MMSegmentation format, run the following command:

```shell
python tools/convert_datasets/duts.py /path/to/DUTS-TR.zip /path/to/DUTS-TE.zip
```
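
`tools/convert_datasets/duts.py` itself is not shown in this diff. As a rough sketch, such a converter unpacks each archive and files images and masks into the layout the config expects (`images/<split>`, `annotations/<split>`); the function and names below are hypothetical, not the PR's actual script:

```python
import os
import shutil
import tempfile
import zipfile

def convert_split(zip_path, out_root, split):
    """Extract a DUTS-style archive and copy images/masks into the
    MMSegmentation layout (images/<split>, annotations/<split>).
    Hypothetical sketch -- the real tools/convert_datasets/duts.py
    in this PR may differ."""
    img_dir = os.path.join(out_root, 'images', split)
    ann_dir = os.path.join(out_root, 'annotations', split)
    os.makedirs(img_dir, exist_ok=True)
    os.makedirs(ann_dir, exist_ok=True)
    with tempfile.TemporaryDirectory() as tmp:
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(tmp)
        for root, _, files in os.walk(tmp):
            for name in files:
                src = os.path.join(root, name)
                if name.endswith('.jpg'):    # RGB image
                    shutil.copy(src, os.path.join(img_dir, name))
                elif name.endswith('.png'):  # binary saliency mask
                    shutil.copy(src, os.path.join(ann_dir, name))
```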

### DUT-OMRON

In salient object detection (SOD), DUT-OMRON is used for evaluation.

First, download [DUT-OMRON-image.zip](http://saliencydetection.net/dut-omron/download/DUT-OMRON-image.zip) and [DUT-OMRON-gt-pixelwise.zip.zip](http://saliencydetection.net/dut-omron/download/DUT-OMRON-gt-pixelwise.zip.zip).

To convert the DUT-OMRON dataset to MMSegmentation format, run the following command:

```shell
python tools/convert_datasets/dut_omron.py /path/to/DUT-OMRON-image.zip /path/to/DUT-OMRON-gt-pixelwise.zip.zip
```

### ECSSD

In salient object detection (SOD), ECSSD is used for evaluation.

First, download [images.zip](https://www.cse.cuhk.edu.hk/leojia/projects/hsaliency/data/ECSSD/images.zip) and [ground_truth_mask.zip](https://www.cse.cuhk.edu.hk/leojia/projects/hsaliency/data/ECSSD/ground_truth_mask.zip).

To convert the ECSSD dataset to MMSegmentation format, run the following command:

```shell
python tools/convert_datasets/ecssd.py /path/to/images.zip /path/to/ground_truth_mask.zip
```

### HKU-IS

In salient object detection (SOD), HKU-IS is used for evaluation.

First, download [HKU-IS.rar](https://sites.google.com/site/ligb86/mdfsaliency/).


To convert the HKU-IS dataset to MMSegmentation format, run the following command:

```shell
python tools/convert_datasets/hku_is.py /path/to/HKU-IS.rar
```
Collaborator: Could you modify the Chinese document accordingly?

68 changes: 68 additions & 0 deletions docs_zh-CN/dataset_prepare.md
@@ -89,6 +89,28 @@ mmsegmentation
| | └── leftImg8bit
| | | └── test
| | | └── night
│   ├── DUTS
│   │   ├── images
│   │   │   ├── training
│   │   │   ├── validation
│   │   ├── annotations
│   │   │   ├── training
│   │   │   ├── validation
│   ├── DUT-OMRON
│   │   ├── images
│   │   │   ├── validation
│   │   ├── annotations
│   │   │   ├── validation
│   ├── ECSSD
│   │   ├── images
│   │   │   ├── validation
│   │   ├── annotations
│   │   │   ├── validation
│   ├── HKU-IS
│   │   ├── images
│   │   │   ├── validation
│   │   ├── annotations
│   │   │   ├── validation
```

### Cityscapes
@@ -195,3 +217,49 @@ python tools/convert_datasets/stare.py /path/to/stare-images.tar /path/to/labels
### Nighttime Driving

因为我们只支持在此数据集上测试模型,所以您只需下载[测试集](http://data.vision.ee.ethz.ch/daid/NighttimeDriving/NighttimeDrivingTest.zip)。

### DUTS

首先,下载 [DUTS-TR.zip](http://saliencydetection.net/duts/download/DUTS-TR.zip) 和 [DUTS-TE.zip](http://saliencydetection.net/duts/download/DUTS-TE.zip) 。

为了将 DUTS 数据集转换成 MMSegmentation 格式,您需要运行如下命令:

```shell
python tools/convert_datasets/duts.py /path/to/DUTS-TR.zip /path/to/DUTS-TE.zip
```

### DUT-OMRON

显著性检测(SOD)任务中 DUT-OMRON 仅作为测试集。

首先,下载 [DUT-OMRON-image.zip](http://saliencydetection.net/dut-omron/download/DUT-OMRON-image.zip) 和 [DUT-OMRON-gt-pixelwise.zip.zip](http://saliencydetection.net/dut-omron/download/DUT-OMRON-gt-pixelwise.zip.zip) 。

为了将 DUT-OMRON 数据集转换成 MMSegmentation 格式,您需要运行如下命令:

```shell
python tools/convert_datasets/dut_omron.py /path/to/DUT-OMRON-image.zip /path/to/DUT-OMRON-gt-pixelwise.zip.zip
```

### ECSSD

显著性检测(SOD)任务中 ECSSD 仅作为测试集。

首先,下载 [images.zip](https://www.cse.cuhk.edu.hk/leojia/projects/hsaliency/data/ECSSD/images.zip) 和 [ground_truth_mask.zip](https://www.cse.cuhk.edu.hk/leojia/projects/hsaliency/data/ECSSD/ground_truth_mask.zip) 。

为了将 ECSSD 数据集转换成 MMSegmentation 格式,您需要运行如下命令:

```shell
python tools/convert_datasets/ecssd.py /path/to/images.zip /path/to/ground_truth_mask.zip
```

### HKU-IS

显著性检测(SOD)任务中 HKU-IS 仅作为测试集。

首先,下载 [HKU-IS.rar](https://sites.google.com/site/ligb86/mdfsaliency/) 。

为了将 HKU-IS 数据集转换成 MMSegmentation 格式,您需要运行如下命令:

```shell
python tools/convert_datasets/hku_is.py /path/to/HKU-IS.rar
```
17 changes: 13 additions & 4 deletions mmseg/apis/test.py
@@ -38,6 +38,7 @@ def single_gpu_test(model,
efficient_test=False,
opacity=0.5,
pre_eval=False,
return_logit=False,
Collaborator: Please add a docstring for this new argument.

format_only=False,
format_args={}):
"""Test with single GPU by progressive mode.
@@ -88,7 +89,8 @@ def single_gpu_test(model,

for batch_indices, data in zip(loader_indices, data_loader):
with torch.no_grad():
result = model(return_loss=False, **data)
result = model(
return_loss=False, return_logit=return_logit, **data)

if efficient_test:
result = [np2tmp(_, tmpdir='.efficient_test') for _ in result]
@@ -99,7 +101,8 @@ def single_gpu_test(model,
if pre_eval:
# TODO: adapt samples_per_gpu > 1.
# only samples_per_gpu=1 valid now
result = dataset.pre_eval(result, indices=batch_indices)
result = dataset.pre_eval(
result, return_logit, indices=batch_indices)

Reviewer: Pass it as a keyword argument: `return_logit=return_logit`.


results.extend(result)

@@ -142,6 +145,7 @@ def multi_gpu_test(model,
gpu_collect=False,
efficient_test=False,
pre_eval=False,
return_logit=False,
format_only=False,
format_args={}):
"""Test model with multiple gpus by progressive mode.
@@ -204,7 +208,11 @@

for batch_indices, data in zip(loader_indices, data_loader):
with torch.no_grad():
result = model(return_loss=False, rescale=True, **data)
result = model(
return_loss=False,
return_logit=return_logit,
rescale=True,
**data)

if efficient_test:
result = [np2tmp(_, tmpdir='.efficient_test') for _ in result]
@@ -215,7 +223,8 @@ def multi_gpu_test(model,
if pre_eval:
# TODO: adapt samples_per_gpu > 1.
# only samples_per_gpu=1 valid now
result = dataset.pre_eval(result, indices=batch_indices)
result = dataset.pre_eval(
result, return_logit, indices=batch_indices)

Reviewer: Pass it as a keyword argument: `return_logit=return_logit`.


results.extend(result)
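
`dataset.pre_eval` turns each prediction into small per-image intermediates (e.g. intersection and union counts) so metrics can be reduced later without keeping full prediction maps in memory. A toy sketch of the idea on flat label lists, not mmseg's implementation:

```python
def intersect_and_union(pred, gt, num_classes):
    """Per-image intersection/union counts, the kind of intermediate a
    pre_eval step can return for later mIoU reduction (toy sketch)."""
    inter = [0] * num_classes
    union = [0] * num_classes
    for p, g in zip(pred, gt):
        if p == g:
            inter[p] += 1
            union[p] += 1
        else:
            union[p] += 1
            union[g] += 1
    return inter, union

inter, union = intersect_and_union([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)
ious = [i / u for i, u in zip(inter, union)]  # class-wise IoU
```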

5 changes: 3 additions & 2 deletions mmseg/core/evaluation/__init__.py
@@ -2,10 +2,11 @@
from .class_names import get_classes, get_palette
from .eval_hooks import DistEvalHook, EvalHook
from .metrics import (eval_metrics, intersect_and_union, mean_dice,
mean_fscore, mean_iou, pre_eval_to_metrics)
mean_fscore, mean_iou, pre_eval_to_metrics,
pre_eval_to_sod_metrics, eval_sod_metrics, calc_sod_metrics)

__all__ = [
'EvalHook', 'DistEvalHook', 'mean_dice', 'mean_iou', 'mean_fscore',
'eval_metrics', 'get_classes', 'get_palette', 'pre_eval_to_metrics',
'intersect_and_union'
'intersect_and_union', 'calc_sod_metrics', 'eval_sod_metrics', 'pre_eval_to_sod_metrics'


]
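
Among SOD metrics, mean absolute error (MAE) is the simplest: the average pixel-wise difference between the predicted saliency map and the binary ground truth. A sketch on flattened float lists; the PR's `eval_sod_metrics` likely also covers F-measure/S-measure variants, so this is illustrative only:

```python
def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and the
    ground truth, both flattened to [0, 1] floats (sketch only)."""
    assert pred and len(pred) == len(gt), 'inputs must be non-empty, equal length'
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(pred)

mae([0.9, 0.1, 0.8, 0.2], [1.0, 0.0, 1.0, 0.0])  # ~0.15
```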
13 changes: 11 additions & 2 deletions mmseg/core/evaluation/eval_hooks.py
@@ -30,9 +30,11 @@ def __init__(self,
by_epoch=False,
efficient_test=False,
pre_eval=False,
return_logit=False,

Reviewer: Add a docstring for this argument.

**kwargs):
super().__init__(*args, by_epoch=by_epoch, **kwargs)
self.pre_eval = pre_eval
self.return_logit = return_logit
if efficient_test:
warnings.warn(
'DeprecationWarning: ``efficient_test`` for evaluation hook '
@@ -47,7 +49,11 @@ def _do_evaluate(self, runner):

from mmseg.apis import single_gpu_test
results = single_gpu_test(
runner.model, self.dataloader, show=False, pre_eval=self.pre_eval)
runner.model,
self.dataloader,
show=False,
pre_eval=self.pre_eval,
return_logit=self.return_logit)
runner.log_buffer.clear()
runner.log_buffer.output['eval_iter_num'] = len(self.dataloader)
key_score = self.evaluate(runner, results)
@@ -77,9 +83,11 @@ def __init__(self,
by_epoch=False,
efficient_test=False,
pre_eval=False,
return_logit=False,
**kwargs):
super().__init__(*args, by_epoch=by_epoch, **kwargs)
self.pre_eval = pre_eval
self.return_logit = return_logit
if efficient_test:
warnings.warn(
'DeprecationWarning: ``efficient_test`` for evaluation hook '
@@ -115,7 +123,8 @@ def _do_evaluate(self, runner):
self.dataloader,
tmpdir=tmpdir,
gpu_collect=self.gpu_collect,
pre_eval=self.pre_eval)
pre_eval=self.pre_eval,
return_logit=self.return_logit)

runner.log_buffer.clear()
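
The hook changes in this file follow one pattern: the flags (`pre_eval`, `return_logit`) are stored at construction and forwarded when evaluation runs. A stripped-down illustration of that pattern, not mmcv's actual hook API:

```python
class ToyEvalHook:
    """Store evaluation flags once, forward them on every eval call
    (hypothetical sketch of the hook pattern used in this PR)."""

    def __init__(self, dataloader, pre_eval=False, return_logit=False):
        self.dataloader = dataloader
        self.pre_eval = pre_eval
        self.return_logit = return_logit

    def do_evaluate(self, test_fn):
        # test_fn stands in for single_gpu_test / multi_gpu_test
        return test_fn(self.dataloader, pre_eval=self.pre_eval,
                       return_logit=self.return_logit)

def fake_test(loader, pre_eval, return_logit):
    return {'pre_eval': pre_eval, 'return_logit': return_logit}

ToyEvalHook([], pre_eval=True, return_logit=True).do_evaluate(fake_test)
```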
