
Commit aba6d62

Doc refactor (open-mmlab#311)
* refactor docs
* add docs
* add modelzoo
* refactor getting started
1 parent a5d15ae commit aba6d62

15 files changed: +755 -470 lines

docs/dataset_prepare.md

+165
@@ -0,0 +1,165 @@
## Prepare datasets

It is recommended to symlink the dataset root to `$MMSEGMENTATION/data`.
If your folder structure is different, you may need to change the corresponding paths in config files; a symlink sketch follows the tree below.

```none
mmsegmentation
├── mmseg
├── tools
├── configs
├── data
│   ├── cityscapes
│   │   ├── leftImg8bit
│   │   │   ├── train
│   │   │   ├── val
│   │   ├── gtFine
│   │   │   ├── train
│   │   │   ├── val
│   ├── VOCdevkit
│   │   ├── VOC2012
│   │   │   ├── JPEGImages
│   │   │   ├── SegmentationClass
│   │   │   ├── ImageSets
│   │   │   │   ├── Segmentation
│   │   ├── VOC2010
│   │   │   ├── JPEGImages
│   │   │   ├── SegmentationClassContext
│   │   │   ├── ImageSets
│   │   │   │   ├── SegmentationContext
│   │   │   │   │   ├── train.txt
│   │   │   │   │   ├── val.txt
│   │   │   ├── trainval_merged.json
│   │   ├── VOCaug
│   │   │   ├── dataset
│   │   │   │   ├── cls
│   ├── ade
│   │   ├── ADEChallengeData2016
│   │   │   ├── annotations
│   │   │   │   ├── training
│   │   │   │   ├── validation
│   │   │   ├── images
│   │   │   │   ├── training
│   │   │   │   ├── validation
│   ├── CHASE_DB1
│   │   ├── images
│   │   │   ├── training
│   │   │   ├── validation
│   │   ├── annotations
│   │   │   ├── training
│   │   │   ├── validation
│   ├── DRIVE
│   │   ├── images
│   │   │   ├── training
│   │   │   ├── validation
│   │   ├── annotations
│   │   │   ├── training
│   │   │   ├── validation
│   ├── HRF
│   │   ├── images
│   │   │   ├── training
│   │   │   ├── validation
│   │   ├── annotations
│   │   │   ├── training
│   │   │   ├── validation
│   ├── STARE
│   │   ├── images
│   │   │   ├── training
│   │   │   ├── validation
│   │   ├── annotations
│   │   │   ├── training
│   │   │   ├── validation
```
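For example, a minimal way to create such symlinks (a sketch; `/path/to/datasets` is a placeholder for wherever your datasets actually live):

```shell
# Hypothetical source paths; adjust to your own storage location.
cd mmsegmentation
mkdir -p data
ln -s /path/to/datasets/cityscapes data/cityscapes
ln -s /path/to/datasets/VOCdevkit data/VOCdevkit
```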
### Cityscapes

The data can be found [here](https://www.cityscapes-dataset.com/downloads/) after registration.

By convention, `**labelTrainIds.png` are used for Cityscapes training.
We provide a [script](https://github.com/open-mmlab/mmsegmentation/blob/master/tools/convert_datasets/cityscapes.py) based on [cityscapesscripts](https://github.com/mcordts/cityscapesScripts)
to generate `**labelTrainIds.png`.

```shell
# --nproc 8 uses 8 processes for the conversion; it may be omitted.
python tools/convert_datasets/cityscapes.py data/cityscapes --nproc 8
```
### Pascal VOC

Pascal VOC 2012 can be downloaded from [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar).
Besides, most recent works on the Pascal VOC dataset usually exploit extra augmentation data, which can be found [here](http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz).
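A download sketch (the VOC tar extracts into `VOCdevkit/VOC2012`; the augmentation archive may unpack under a different folder name, e.g. `benchmark_RELEASE`, which you would then rename or symlink to `VOCaug` to match the tree above):

```shell
# Download and extract VOC2012 plus the augmented annotations into data/.
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
tar -xf VOCtrainval_11-May-2012.tar -C data/
wget http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz
tar -xzf benchmark.tgz -C data/VOCdevkit/
```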
If you would like to use the augmented VOC dataset, please run the following command to convert the augmentation annotations into the proper format.

```shell
# --nproc 8 uses 8 processes for the conversion; it may be omitted.
python tools/convert_datasets/voc_aug.py data/VOCdevkit data/VOCdevkit/VOCaug --nproc 8
```

Please refer to [concat dataset](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/tutorials/new_dataset.md#concatenate-dataset) for details about how to concatenate them and train them together.
### ADE20K

The training and validation set of ADE20K can be downloaded from this [link](http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip).
You may also download the test set from [here](http://data.csail.mit.edu/places/ADEchallenge/release_test.zip).
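A download sketch (the zip unpacks to `ADEChallengeData2016`, matching the tree above):

```shell
# Download and extract ADE20K into data/ade.
mkdir -p data/ade
wget http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip
unzip -q ADEChallengeData2016.zip -d data/ade
```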
### Pascal Context

The training and validation set of Pascal Context can be downloaded from [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2010/VOCtrainval_03-May-2010.tar). You may also download the test set from [here](http://host.robots.ox.ac.uk:8080/eval/downloads/VOC2010test.tar) after registration.

To split the training and validation sets out of the original dataset, you may download trainval_merged.json from [here](https://codalabuser.blob.core.windows.net/public/trainval_merged.json).

If you would like to use the Pascal Context dataset, please install the [Detail](https://github.com/zhanghang1989/detail-api) API (an installation sketch follows) and then run the conversion command below to convert the annotations into the proper format.
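A minimal installation sketch for the Detail API (assuming the repo keeps its cocoapi-style `PythonAPI` layout; the repo's own README is authoritative):

```shell
git clone https://github.com/zhanghang1989/detail-api.git
cd detail-api/PythonAPI
python setup.py install  # may require Cython to be installed first
cd ../..
```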
```shell
python tools/convert_datasets/pascal_context.py data/VOCdevkit data/VOCdevkit/VOC2010/trainval_merged.json
```
### CHASE DB1

The training and validation set of CHASE DB1 can be downloaded from [here](https://staffnet.kingston.ac.uk/~ku15565/CHASE_DB1/assets/CHASEDB1.zip).

To convert the CHASE DB1 dataset to the MMSegmentation format, run the following command:

```shell
python tools/convert_datasets/chase_db1.py /path/to/CHASEDB1.zip
```

The script will generate the directory structure automatically.
### DRIVE

The training and validation set of DRIVE can be downloaded from [here](https://drive.grand-challenge.org/). Before downloading, you need to register an account. Currently, '1st_manual' is not provided officially.

To convert the DRIVE dataset to the MMSegmentation format, run the following command:

```shell
python tools/convert_datasets/drive.py /path/to/training.zip /path/to/test.zip
```

The script will generate the directory structure automatically.
### HRF

First, download [healthy.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/healthy.zip), [glaucoma.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/glaucoma.zip), [diabetic_retinopathy.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/diabetic_retinopathy.zip), [healthy_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/healthy_manualsegm.zip), [glaucoma_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/glaucoma_manualsegm.zip) and [diabetic_retinopathy_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/diabetic_retinopathy_manualsegm.zip).
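A download sketch that fetches all six archives in one loop (URLs exactly as listed above):

```shell
BASE=https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images
for f in healthy glaucoma diabetic_retinopathy; do
  wget "$BASE/$f.zip" "$BASE/${f}_manualsegm.zip"
done
```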
To convert the HRF dataset to the MMSegmentation format, run the following command:

```shell
python tools/convert_datasets/hrf.py /path/to/healthy.zip /path/to/healthy_manualsegm.zip /path/to/glaucoma.zip /path/to/glaucoma_manualsegm.zip /path/to/diabetic_retinopathy.zip /path/to/diabetic_retinopathy_manualsegm.zip
```

The script will generate the directory structure automatically.
### STARE

First, download [stare-images.tar](http://cecas.clemson.edu/~ahoover/stare/probing/stare-images.tar), [labels-ah.tar](http://cecas.clemson.edu/~ahoover/stare/probing/labels-ah.tar) and [labels-vk.tar](http://cecas.clemson.edu/~ahoover/stare/probing/labels-vk.tar).

To convert the STARE dataset to the MMSegmentation format, run the following command:

```shell
python tools/convert_datasets/stare.py /path/to/stare-images.tar /path/to/labels-ah.tar /path/to/labels-vk.tar
```

The script will generate the directory structure automatically.

docs/install.md → docs/get_started.md

+68 -7
@@ -1,9 +1,14 @@
-## Requirements
+## Prerequisites

-- Linux or Windows(Experimental)
+- Linux or macOS (Windows is in experimental support)
 - Python 3.6+
-- PyTorch 1.3 or higher
-- [mmcv](https://github.com/open-mmlab/mmcv)
+- PyTorch 1.3+
+- CUDA 9.2+ (if you build PyTorch from source, CUDA 9.0 is also compatible)
+- GCC 5+
+- [MMCV](https://mmcv.readthedocs.io/en/latest/#installation)
+
+Note: You need to run `pip uninstall mmcv` first if you have mmcv installed.
+If mmcv and mmcv-full are both installed, there will be a `ModuleNotFoundError`.

 ## Installation

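In practice, the note above amounts to this sequence (a sketch; mmcv-full is the build that ships the compiled CUDA ops):

```shell
pip uninstall -y mmcv   # remove the lite build if present
pip install mmcv-full   # then install the full build
```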
@@ -91,9 +96,9 @@ Note:
 5. Some dependencies are optional. Simply running `pip install -e .` will only install the minimum runtime requirements.
 To use optional dependencies like `cityscapesscripts`, either install them manually with `pip install -r requirements/optional.txt` or specify the desired extras when calling `pip` (e.g. `pip install -e .[optional]`). Valid keys for the extras field are: `all`, `tests`, `build`, and `optional`.

-## A from-scratch setup script
+### A from-scratch setup script

-### Linux
+#### Linux

 Here is a full script for setting up mmsegmentation with conda and linking the dataset path (supposing that your dataset path is $DATA_ROOT).

@@ -111,7 +116,7 @@ mkdir data
 ln -s $DATA_ROOT data
 ```

-### Windows(Experimental)
+#### Windows (Experimental)

 Here is a full script for setting up mmsegmentation with conda and linking the dataset path (supposing that your dataset path is
 %DATA_ROOT%. Notice: it must be an absolute path).

@@ -130,3 +135,59 @@ pip install -e . # or "python setup.py develop"

 mklink /D data %DATA_ROOT%
 ```
#### Developing with multiple MMSegmentation versions

The train and test scripts already modify the `PYTHONPATH` to ensure that they use the MMSegmentation in the current directory.

To use the default MMSegmentation installed in the environment rather than the one you are working with, you can remove the following line from those scripts:

```shell
PYTHONPATH="$(dirname $0)/..":$PYTHONPATH
```

## Verification

To verify whether MMSegmentation and the required environment are installed correctly, we can run the following sample Python code to initialize a segmentor and run inference on a demo image:

```python
from mmseg.apis import inference_segmentor, init_segmentor
import mmcv

config_file = 'configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py'
checkpoint_file = 'checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth'

# build the model from a config file and a checkpoint file
model = init_segmentor(config_file, checkpoint_file, device='cuda:0')

# test a single image and show the results
img = 'test.jpg'  # or img = mmcv.imread(img), which will only load it once
result = inference_segmentor(model, img)
# visualize the results in a new window
model.show_result(img, result, show=True)
# or save the visualization results to image files
model.show_result(img, result, out_file='result.jpg')

# test a video and show the results
video = mmcv.VideoReader('video.mp4')
for frame in video:
    result = inference_segmentor(model, frame)
    model.show_result(frame, result, wait_time=1)
```
The above code should run successfully once you finish the installation, provided the config and checkpoint files are present locally.
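A sketch for fetching that checkpoint (the URL below follows the usual open-mmlab download-host layout for the MMSegmentation model zoo; verify it against the model zoo page before relying on it):

```shell
mkdir -p checkpoints
wget -P checkpoints \
  https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth
```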
We also provide a demo script to test a single image.

```shell
python demo/image_demo.py ${IMAGE_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} [--device ${DEVICE_NAME}] [--palette ${PALETTE}]
```

Examples:

```shell
python demo/image_demo.py demo/demo.jpg configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
    checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth --device cuda:0 --palette cityscapes
```

A notebook demo can be found in [demo/inference_demo.ipynb](../demo/inference_demo.ipynb).
