(OLD) [Feature] Support LoveDA dataset #1006

Status: Closed. Wants to merge 22 commits.

Changes from all commits (22 commits):
d26173e
update LoveDA dataset api
Junjue-Wang Oct 31, 2021
872a284
revised lint errors in dataset_prepare.md
Junjue-Wang Oct 31, 2021
bcd670c
revised lint errors in loveda.py
Junjue-Wang Oct 31, 2021
349fc2d
[Fix] Change `self.loss_decode` back to `dict` in Single Loss situati…
MengzhangLI Nov 1, 2021
ddce375
smaller input & channels of unittest (#1004)
MengzhangLI Nov 1, 2021
54435fb
[Feature] Support TIMMBackbone (#998)
Junjun2016 Nov 2, 2021
5049651
Bump v0.19.0 (#1009)
Junjun2016 Nov 2, 2021
14dc00a
delete benchmark_new.py (#1012)
MengzhangLI Nov 3, 2021
7a1c9a5
[Fix] Fix the bug that vit cannot load pretrain properly when using i…
RockeyCoss Nov 3, 2021
b2744ca
revised lint errors in loveda.py
Junjue-Wang Nov 4, 2021
ed6cb48
revised lint errors in dataset_prepare.md
Junjue-Wang Nov 5, 2021
2289e27
revised lint errors in dataset_prepare.md
Junjue-Wang Nov 5, 2021
159db2f
checked with isort and yapf
Junjue-Wang Nov 6, 2021
686a51d
checked with isort and yapf
Junjue-Wang Nov 6, 2021
b877e12
checked with isort and yapf
Junjue-Wang Nov 7, 2021
a116034
Revert "checked with isort and yapf"
Junjue-Wang Nov 8, 2021
6c2fccc
Revert "checked with isort and yapf"
Junjue-Wang Nov 8, 2021
78e21cc
Revert "revised lint errors in dataset_prepare.md"
Junjue-Wang Nov 8, 2021
020acda
Revert "checked with isort and yapf"
Junjue-Wang Nov 8, 2021
db3ee5a
Revert "checked with isort and yapf"
Junjue-Wang Nov 8, 2021
d55b139
Merge branch 'LoveDA' of github.com:Junjue-Wang/mmsegmentation into L…
MengzhangLI Nov 8, 2021
a493897
add configs & fix bugs
MengzhangLI Nov 9, 2021
17 changes: 17 additions & 0 deletions .github/workflows/build.yml
@@ -71,9 +71,17 @@ jobs:
run: rm -rf .eggs && pip install -e .
- name: Run unittests and generate coverage report
run: |
pip install timm
coverage run --branch --source mmseg -m pytest tests/
coverage xml
coverage report -m
if: ${{matrix.torch >= '1.5.0'}}
- name: Skip timm unittests and generate coverage report
run: |
coverage run --branch --source mmseg -m pytest tests/ --ignore tests/test_models/test_backbones/test_timm_backbone.py
coverage xml
coverage report -m
if: ${{matrix.torch < '1.5.0'}}

build_cuda101:
runs-on: ubuntu-18.04
@@ -142,9 +150,17 @@ jobs:
TORCH_CUDA_ARCH_LIST=7.0 pip install .
- name: Run unittests and generate coverage report
run: |
python -m pip install timm
coverage run --branch --source mmseg -m pytest tests/
coverage xml
coverage report -m
if: ${{matrix.torch >= '1.5.0'}}
- name: Skip timm unittests and generate coverage report
run: |
coverage run --branch --source mmseg -m pytest tests/ --ignore tests/test_models/test_backbones/test_timm_backbone.py
coverage xml
coverage report -m
if: ${{matrix.torch < '1.5.0'}}
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1.0.10
with:
@@ -198,6 +214,7 @@ jobs:
TORCH_CUDA_ARCH_LIST=7.0 pip install .
- name: Run unittests and generate coverage report
run: |
python -m pip install timm
coverage run --branch --source mmseg -m pytest tests/
coverage xml
coverage report -m
2 changes: 1 addition & 1 deletion README.md
@@ -49,7 +49,7 @@ This project is released under the [Apache 2.0 license](LICENSE).

## Changelog

v0.18.0 was released in 10/07/2021.
v0.19.0 was released in 11/02/2021.
Please refer to [changelog.md](docs/changelog.md) for details and release history.

## Benchmark and model zoo
2 changes: 1 addition & 1 deletion README_zh-CN.md
@@ -48,7 +48,7 @@ MMSegmentation is an open source semantic segmentation toolbox based on PyTorch. It is part of the O

## Changelog

The latest monthly release, v0.18.0, was published on 2021.10.07.
The latest monthly release, v0.19.0, was published on 2021.11.2.
Please refer to the [changelog](docs/changelog.md) for more version details and release history.

## Benchmark and model zoo
54 changes: 54 additions & 0 deletions configs/_base_/datasets/loveda.py
@@ -0,0 +1,54 @@
# dataset settings
dataset_type = 'LoveDADataset'
data_root = 'data/loveDA'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 512)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', reduce_zero_label=True),
dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)),
dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
dict(type='RandomFlip', prob=0.5),
dict(type='PhotoMetricDistortion'),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(1024, 1024),
# img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(type='Normalize', **img_norm_cfg),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img']),
])
]
data = dict(
samples_per_gpu=4,
workers_per_gpu=4,
train=dict(
type=dataset_type,
data_root=data_root,
img_dir='img_dir/train',
ann_dir='ann_dir/train',
pipeline=train_pipeline),
val=dict(
type=dataset_type,
data_root=data_root,
img_dir='img_dir/val',
ann_dir='ann_dir/val',
pipeline=test_pipeline),
test=dict(
type=dataset_type,
data_root=data_root,
img_dir='img_dir/test',
ann_dir=None,
pipeline=test_pipeline))
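One subtlety in the pipeline above is `reduce_zero_label=True`: LoveDA masks use 0 as a "no-data" value, so loading remaps 0 to the ignore index (255) and shifts the seven valid classes 1-7 down to 0-6, which matches `num_classes=7` in the model configs below. A minimal stand-alone sketch of that remapping (a hypothetical helper for illustration, not mmseg's actual implementation):

```python
IGNORE_INDEX = 255

def reduce_zero_label(mask):
    """Remap LoveDA-style labels: 0 (no-data) -> 255, classes 1..7 -> 0..6."""
    return [IGNORE_INDEX if v == 0 else v - 1 for v in mask]

print(reduce_zero_label([0, 1, 4, 7]))  # -> [255, 0, 3, 6]
```

With this remapping, pixels that were "no-data" are excluded from the loss via the ignore index, and the remaining class ids line up with the 7-class heads.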
@@ -0,0 +1,2 @@
_base_ = './deeplabv3plus_r50-d8_512x512_80k_loveda.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
11 changes: 11 additions & 0 deletions configs/deeplabv3plus/deeplabv3plus_r18-d8_512x512_80k_loveda.py
@@ -0,0 +1,11 @@
_base_ = './deeplabv3plus_r50-d8_512x512_80k_loveda.py'
model = dict(
pretrained='open-mmlab://resnet18_v1c',
backbone=dict(depth=18),
decode_head=dict(
c1_in_channels=64,
c1_channels=12,
in_channels=512,
channels=128,
),
auxiliary_head=dict(in_channels=256, channels=64))
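The config above illustrates mmcv-style `_base_` inheritance: a child config is merged recursively into its base, overriding only the keys it names (here the backbone depth and head channel widths). A stripped-down sketch of that merge rule, using illustrative config dicts rather than mmcv's actual implementation:

```python
def merge_cfg(base, child):
    """Recursively merge child into base: child keys win, nested dicts merge."""
    out = dict(base)
    for key, value in child.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_cfg(out[key], value)
        else:
            out[key] = value
    return out

base = {'backbone': {'depth': 50, 'norm_eval': False}, 'pretrained': 'r50'}
child = {'backbone': {'depth': 18}, 'pretrained': 'r18'}
print(merge_cfg(base, child))
# -> {'backbone': {'depth': 18, 'norm_eval': False}, 'pretrained': 'r18'}
```

This is why the r18/r101 variants only need to state the keys that differ from the r50 base.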
@@ -0,0 +1,6 @@
_base_ = [
'../_base_/models/deeplabv3plus_r50-d8.py', '../_base_/datasets/loveda.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
model = dict(
decode_head=dict(num_classes=7), auxiliary_head=dict(num_classes=7))
4 changes: 4 additions & 0 deletions configs/hrnet/fcn_hr18_512x512_80k_loveda.py
@@ -0,0 +1,4 @@
_base_ = [
'../_base_/models/fcn_hr18.py', '../_base_/datasets/loveda.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
10 changes: 10 additions & 0 deletions configs/hrnet/fcn_hr18s_512x512_80k_loveda.py
@@ -0,0 +1,10 @@
_base_ = './fcn_hr18_512x512_80k_loveda.py'
model = dict(
# pretrained='open-mmlab://msra/hrnetv2_w18_small',
pretrained='./pretrained/hrnetv2_w18_small-b5a04e21.pth',
backbone=dict(
extra=dict(
stage1=dict(num_blocks=(2, )),
stage2=dict(num_blocks=(2, 2)),
stage3=dict(num_modules=3, num_blocks=(2, 2, 2)),
stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2)))))
11 changes: 11 additions & 0 deletions configs/hrnet/fcn_hr48_512x512_80k_loveda.py
@@ -0,0 +1,11 @@
_base_ = './fcn_hr18_512x512_80k_loveda.py'
model = dict(
# pretrained='open-mmlab://msra/hrnetv2_w48',
pretrained='./pretrained/hrnetv2_w48-d2186c55.pth',
backbone=dict(
extra=dict(
stage2=dict(num_channels=(48, 96)),
stage3=dict(num_channels=(48, 96, 192)),
stage4=dict(num_channels=(48, 96, 192, 384)))),
decode_head=dict(
in_channels=[48, 96, 192, 384], channels=sum([48, 96, 192, 384])))
2 changes: 2 additions & 0 deletions configs/pspnet/pspnet_r101-d8_512x512_80k_loveda.py
@@ -0,0 +1,2 @@
_base_ = './pspnet_r50-d8_512x512_80k_loveda.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
9 changes: 9 additions & 0 deletions configs/pspnet/pspnet_r18-d8_512x512_80k_loveda.py
@@ -0,0 +1,9 @@
_base_ = './pspnet_r50-d8_512x512_80k_loveda.py'
model = dict(
pretrained='open-mmlab://resnet18_v1c',
backbone=dict(depth=18),
decode_head=dict(
in_channels=512,
channels=128,
),
auxiliary_head=dict(in_channels=256, channels=64))
6 changes: 6 additions & 0 deletions configs/pspnet/pspnet_r50-d8_512x512_80k_loveda.py
@@ -0,0 +1,6 @@
_base_ = [
'../_base_/models/pspnet_r50-d8.py', '../_base_/datasets/loveda.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
model = dict(
decode_head=dict(num_classes=7), auxiliary_head=dict(num_classes=7))
40 changes: 40 additions & 0 deletions docs/changelog.md
@@ -1,5 +1,45 @@
## Changelog

### V0.19 (11/02/2021)

**Highlights**

- Support TIMMBackbone wrapper ([#998](https://github.com/open-mmlab/mmsegmentation/pull/998))
- Support custom hook ([#428](https://github.com/open-mmlab/mmsegmentation/pull/428))
- Add codespell pre-commit hook ([#920](https://github.com/open-mmlab/mmsegmentation/pull/920))
- Add FastFCN benchmark on ADE20K ([#972](https://github.com/open-mmlab/mmsegmentation/pull/972))

**New Features**

- Support TIMMBackbone wrapper ([#998](https://github.com/open-mmlab/mmsegmentation/pull/998))
- Support custom hook ([#428](https://github.com/open-mmlab/mmsegmentation/pull/428))
- Add FastFCN benchmark on ADE20K ([#972](https://github.com/open-mmlab/mmsegmentation/pull/972))
- Add codespell pre-commit hook and fix typos ([#920](https://github.com/open-mmlab/mmsegmentation/pull/920))

**Improvements**

- Make inputs & channels smaller in unittests ([#1004](https://github.com/open-mmlab/mmsegmentation/pull/1004))
- Change `self.loss_decode` back to `dict` in Single Loss situation ([#1002](https://github.com/open-mmlab/mmsegmentation/pull/1002))

**Bug Fixes**

- Fix typo in usage example ([#1003](https://github.com/open-mmlab/mmsegmentation/pull/1003))
- Add contiguous after permutation in ViT ([#992](https://github.com/open-mmlab/mmsegmentation/pull/992))
- Fix the invalid link ([#985](https://github.com/open-mmlab/mmsegmentation/pull/985))
- Fix bug in CI with python 3.9 ([#994](https://github.com/open-mmlab/mmsegmentation/pull/994))
- Fix bug when loading class name from file in custom dataset ([#923](https://github.com/open-mmlab/mmsegmentation/pull/923))

**Contributors**

- @ShoupingShan made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/923
- @RockeyCoss made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/954
- @HarborYuan made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/992
- @lkm2835 made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/1003
- @gszh made their first contribution in https://github.com/open-mmlab/mmsegmentation/pull/428
- @VVsssssk
- @MengzhangLI
- @Junjun2016

### V0.18 (10/07/2021)

**Highlights**
30 changes: 30 additions & 0 deletions docs/dataset_prepare.md
@@ -253,3 +253,33 @@ Since we only support test models on this dataset, you may only download [the va
### Nighttime Driving

Since we only support test models on this dataset, you may only download [the test set](http://data.vision.ee.ethz.ch/daid/NighttimeDriving/NighttimeDrivingTest.zip).

### LoveDA

The data can be downloaded [here](https://drive.google.com/drive/folders/1ibYV0qwn4yuuh068Rnc-w4tPi0U0c-ti?usp=sharing).

For the LoveDA dataset, please run the following commands to download and reorganize the dataset.

```shell
# create a directory and download Train.zip, Val.zip and Test.zip into it
# from the Google Drive link above (note: wget cannot fetch a Google Drive
# folder URL directly)
mkdir loveda && cd loveda

# unzip
unzip '*.zip'

# Convert into segmentation splits
mkdir -p img_dir/train img_dir/val img_dir/test ann_dir/train ann_dir/val
mv Train/Rural/images_png/* img_dir/train
mv Train/Urban/images_png/* img_dir/train
mv Val/Rural/images_png/* img_dir/val
mv Val/Urban/images_png/* img_dir/val
mv Test/Rural/images_png/* img_dir/test
mv Test/Urban/images_png/* img_dir/test
mv Train/Rural/masks_png/* ann_dir/train
mv Train/Urban/masks_png/* ann_dir/train
mv Val/Rural/masks_png/* ann_dir/val
mv Val/Urban/masks_png/* ann_dir/val
```

More details about LoveDA can be found [here](https://github.com/Junjue-Wang/LoveDA).
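After reorganizing, the resulting layout can be sanity-checked with a short script before training (a sketch with an assumed helper name; adjust the root path as needed):

```python
import os

# Split directories the loveda.py dataset config expects under data_root.
EXPECTED = [
    'img_dir/train', 'img_dir/val', 'img_dir/test',
    'ann_dir/train', 'ann_dir/val',
]

def missing_dirs(root):
    """Return the expected split directories that do not exist under root."""
    return [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]

print(missing_dirs('data/loveDA'))  # an empty list means the layout is complete
```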
1 change: 1 addition & 0 deletions docs/get_started.md
@@ -12,6 +12,7 @@ The compatible MMSegmentation and MMCV versions are as below. Please install the
| MMSegmentation version | MMCV version |
|:-------------------:|:-------------------:|
| master | mmcv-full>=1.3.13, <1.4.0 |
| 0.19.0 | mmcv-full>=1.3.13, <1.4.0 |
| 0.18.0 | mmcv-full>=1.3.13, <1.4.0 |
| 0.17.0 | mmcv-full>=1.3.7, <1.4.0 |
| 0.16.0 | mmcv-full>=1.3.7, <1.4.0 |
34 changes: 34 additions & 0 deletions docs_zh-CN/dataset_prepare.md
@@ -195,3 +195,37 @@ python tools/convert_datasets/stare.py /path/to/stare-images.tar /path/to/labels
### Nighttime Driving

Since we only support testing models on this dataset, you only need to download the [test set](http://data.vision.ee.ethz.ch/daid/NighttimeDriving/NighttimeDrivingTest.zip).

### LoveDA

Download the [LoveDA dataset](https://drive.google.com/drive/folders/1ibYV0qwn4yuuh068Rnc-w4tPi0U0c-ti?usp=sharing).

For the LoveDA dataset, please run the following commands to download and organize the dataset.

```shell

# create a directory and download Train.zip, Val.zip and Test.zip into it
# from the Google Drive link above (note: wget cannot fetch a Google Drive
# folder URL directly)
mkdir loveda && cd loveda

# unzip
unzip '*.zip'

# Convert into segmentation splits
mkdir -p img_dir/train img_dir/val img_dir/test ann_dir/train ann_dir/val
mv Train/Rural/images_png/* img_dir/train
mv Train/Urban/images_png/* img_dir/train
mv Val/Rural/images_png/* img_dir/val
mv Val/Urban/images_png/* img_dir/val
mv Test/Rural/images_png/* img_dir/test
mv Test/Urban/images_png/* img_dir/test
mv Train/Rural/masks_png/* ann_dir/train
mv Train/Urban/masks_png/* ann_dir/train
mv Val/Rural/masks_png/* ann_dir/val
mv Val/Urban/masks_png/* ann_dir/val
```

More details about LoveDA can be found [here](https://github.com/Junjue-Wang/LoveDA).
1 change: 1 addition & 0 deletions docs_zh-CN/get_started.md
@@ -12,6 +12,7 @@
| MMSegmentation version | MMCV version |
|:-------------------:|:-------------------:|
| master | mmcv-full>=1.3.13, <1.4.0 |
| 0.19.0 | mmcv-full>=1.3.13, <1.4.0 |
| 0.18.0 | mmcv-full>=1.3.13, <1.4.0 |
| 0.17.0 | mmcv-full>=1.3.7, <1.4.0 |
| 0.16.0 | mmcv-full>=1.3.7, <1.4.0 |
8 changes: 7 additions & 1 deletion mmseg/core/seg/sampler/ohem_pixel_sampler.py
@@ -1,5 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved.
import torch
import torch.nn as nn
import torch.nn.functional as F

from ..builder import PIXEL_SAMPLERS
@@ -62,14 +63,19 @@ def sample(self, seg_logit, seg_label):
threshold = max(min_threshold, self.thresh)
valid_seg_weight[seg_prob[valid_mask] < threshold] = 1.
else:
if not isinstance(self.context.loss_decode, nn.ModuleList):
losses_decode = [self.context.loss_decode]
else:
losses_decode = self.context.loss_decode
losses = 0.0
for loss_module in self.context.loss_decode:
for loss_module in losses_decode:
losses += loss_module(
seg_logit,
seg_label,
weight=None,
ignore_index=self.context.ignore_index,
reduction_override='none')

# faster than topk according to https://github.com/pytorch/pytorch/issues/22812 # noqa
_, sort_indices = losses[valid_mask].sort(descending=True)
valid_seg_weight[sort_indices[:batch_kept]] = 1.
Expand Down
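The fix above normalizes `self.context.loss_decode` before the loop: in the single-loss case it is a plain module rather than an `nn.ModuleList`, so iterating over it directly would fail. The same wrap-in-a-list pattern in plain Python, with illustrative stand-ins for the loss modules:

```python
def as_loss_list(loss_decode):
    """Wrap a single loss in a list; pass an existing list through unchanged."""
    if not isinstance(loss_decode, list):
        return [loss_decode]
    return loss_decode

# A single callable loss and a list of losses are now handled uniformly.
single = lambda: 0.5
pair = [lambda: 0.5, lambda: 0.25]
total = sum(loss() for loss in as_loss_list(single))     # 0.5
total_pair = sum(loss() for loss in as_loss_list(pair))  # 0.75
```

The production code checks against `nn.ModuleList` instead of `list`, but the shape of the fix is the same.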
27 changes: 21 additions & 6 deletions mmseg/datasets/__init__.py
@@ -9,16 +9,31 @@
from .dataset_wrappers import ConcatDataset, RepeatDataset
from .drive import DRIVEDataset
from .hrf import HRFDataset
from .loveda import LoveDADataset
from .night_driving import NightDrivingDataset
from .pascal_context import PascalContextDataset, PascalContextDataset59
from .stare import STAREDataset
from .voc import PascalVOCDataset

__all__ = [
'CustomDataset', 'build_dataloader', 'ConcatDataset', 'RepeatDataset',
'DATASETS', 'build_dataset', 'PIPELINES', 'CityscapesDataset',
'PascalVOCDataset', 'ADE20KDataset', 'PascalContextDataset',
'PascalContextDataset59', 'ChaseDB1Dataset', 'DRIVEDataset', 'HRFDataset',
'STAREDataset', 'DarkZurichDataset', 'NightDrivingDataset',
'COCOStuffDataset'
'CustomDataset',
'build_dataloader',
'ConcatDataset',
'RepeatDataset',
'DATASETS',
'build_dataset',
'PIPELINES',
'CityscapesDataset',
'PascalVOCDataset',
'ADE20KDataset',
'PascalContextDataset',
'PascalContextDataset59',
'ChaseDB1Dataset',
'DRIVEDataset',
'HRFDataset',
'STAREDataset',
'DarkZurichDataset',
'NightDrivingDataset',
'COCOStuffDataset',
'LoveDADataset',
Review comment (Collaborator), on the added line above:

Suggested change: `'LoveDADataset',` → `'LoveDADataset'`

Reply (Contributor, author): @MengzhangLI What should we do? Create a new PR for what?
]