Commit 89f4abe

Update README with config usage instructions
1 parent 98103aa commit 89f4abe

1 file changed: +65 −1 lines changed

README.md

@@ -59,11 +59,75 @@ This repository aims at mirroring popular semantic segmentation architectures in
### Data

* Download data for desired dataset(s) from list of URLs [here](https://meetshah1995.github.io/semantic-segmentation/deep-learning/pytorch/visdom/2017/06/01/semantic-segmentation-over-the-years.html#sec_datasets).
- * Extract the zip / tar and modify the path appropriately in `config.yaml`
+ * Extract the zip / tar and modify the path appropriately in your `config.yaml`
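For example, after extracting CamVid, the relevant entries in `config.yaml` might look like this (the dataset name and path below are illustrative, not defaults):

```yaml
data:
    dataset: camvid            # one of the supported dataset loaders
    path: /datasets/CamVid/    # wherever the archive was extracted
```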
### Usage

**Setup config file**

```yaml
# Model Configuration
model:
    arch: <name> [options: 'fcn[8,16,32]s, unet, segnet, pspnet, icnet, icnetBN, linknet, frrn[A,B]']
    <model_keyarg_1>: <value>

# Data Configuration
data:
    dataset: <name> [options: 'pascal, camvid, ade20k, mit_sceneparsing_benchmark, cityscapes, nyuv2, sunrgbd, vistas']
    train_split: <split_to_train_on>
    val_split: <split_to_validate_on>
    img_rows: 512
    img_cols: 1024
    path: <path/to/data>
    <dataset_keyarg1>: <value>

# Training Configuration
training:
    n_workers: 64
    train_iters: 35000
    batch_size: 16
    val_interval: 500
    print_interval: 25
    loss:
        name: <loss_type> [options: 'cross_entropy, bootstrapped_cross_entropy, multi_scale_crossentropy']
        <loss_keyarg1>: <value>

    # Optimizer Configuration
    optimizer:
        name: <optimizer_name> [options: 'sgd, adam, adamax, asgd, adadelta, adagrad, rmsprop']
        lr: 1.0e-3
        <optimizer_keyarg1>: <value>

        # Warmup LR Configuration
        warmup_iters: <iters for lr warmup>
        mode: <'constant' or 'linear' warmup>
        gamma: <gamma for warmup>

    # Augmentations Configuration
    augmentations:
        gamma: x              #[gamma varied in 1 to 1+x]
        hue: x                #[hue varied in -x to x]
        brightness: x         #[brightness varied in 1-x to 1+x]
        saturation: x         #[saturation varied in 1-x to 1+x]
        contrast: x           #[contrast varied in 1-x to 1+x]
        rcrop: [h, w]         #[random crop of size (h, w)]
        translate: [dh, dw]   #[reflective translation by (dh, dw)]
        rotate: d             #[rotate in -d to d degrees]
        scale: [h, w]         #[scale to size (h, w)]
        ccrop: [h, w]         #[center crop of (h, w)]
        hflip: p              #[flip horizontally with probability p]
        vflip: p              #[flip vertically with probability p]

    # LR Schedule Configuration
    lr_schedule:
        name: <schedule_type> [options: 'constant_lr, poly_lr, multi_step, cosine_annealing, exp_lr']
        <scheduler_keyarg1>: <value>

    # Resume from checkpoint
    resume: <path_to_checkpoint>
```
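For reference, a minimal filled-in config might look like the sketch below. The architecture, dataset, and hyperparameter values are illustrative choices rather than shipped defaults; any extra keys under a section (e.g. `momentum` for SGD) stand in for the `<keyarg>: <value>` placeholders above.

```yaml
# Illustrative example: FCN-8s on Pascal VOC (all values are example choices)
model:
    arch: fcn8s

data:
    dataset: pascal
    train_split: train
    val_split: val
    img_rows: 256
    img_cols: 256
    path: /datasets/VOCdevkit/VOC2012/

training:
    n_workers: 4
    train_iters: 35000
    batch_size: 8
    val_interval: 500
    print_interval: 25
    loss:
        name: cross_entropy
    optimizer:
        name: sgd
        lr: 1.0e-4
        momentum: 0.99         # optimizer keyword arguments pass through
        weight_decay: 0.0005
    lr_schedule:
        name: constant_lr
```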
**To train the model:**
```
