Commit 54bd4bd

[Enhancement] Add codespell pre-commit hook and fix typos (#920)
* add codespell pre-commit hook and fix typos

* Update mmseg/models/decode_heads/dpt_head.py

* Update mmseg/models/backbones/vit.py

* Update mmseg/models/backbones/vit.py

* fix typos

* skip formating typo

* deprecate formating

* skip ipynb

* unstage ipynb changes

* unstage ipynb changes

* fix typos in ipynb

* unstage ipynb changes
Junjun2016 authored Oct 13, 2021
1 parent d4d64eb commit 54bd4bd
Showing 23 changed files with 352 additions and 333 deletions.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/config.yml
@@ -3,4 +3,4 @@ blank_issues_enabled: false
contact_links:
- name: MMSegmentation Documentation
url: https://mmsegmentation.readthedocs.io
-about: Check the docs and FAQ to see if you question is already anwsered.
+about: Check the docs and FAQ to see if you question is already answered.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/error-report.md
@@ -30,7 +30,7 @@ A clear and concise description of what the bug is.

**Environment**

-1. Please run `python mmseg/utils/collect_env.py` to collect necessary environment infomation and paste it here.
+1. Please run `python mmseg/utils/collect_env.py` to collect necessary environment information and paste it here.
2. You may add addition that may be helpful for locating the problem, such as
- How you installed PyTorch [e.g., pip, conda, source]
- Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
4 changes: 4 additions & 0 deletions .pre-commit-config.yaml
@@ -34,6 +34,10 @@ repos:
hooks:
- id: markdownlint
args: ["-r", "~MD002,~MD013,~MD029,~MD033,~MD034,~MD036"]
+  - repo: https://github.com/codespell-project/codespell
+    rev: v2.1.0
+    hooks:
+      - id: codespell
- repo: https://github.com/myint/docformatter
rev: v1.3.1
hooks:
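With the hook registered, spell checking runs automatically on `git commit`; assuming pre-commit itself is installed, it can also be run on demand with `pre-commit run codespell --all-files`. The commit message mentions skipping notebooks and the legacy `formating` spelling, which codespell supports through its `--skip` and `--ignore-words-list` options; the exact arguments used in this commit are not shown in the loaded diffs.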
2 changes: 1 addition & 1 deletion README.md
@@ -124,7 +124,7 @@ Please refer to [get_started.md](docs/get_started.md#installation) for installat

Please see [train.md](docs/train.md) and [inference.md](docs/inference.md) for the basic usage of MMSegmentation.
There are also tutorials for [customizing dataset](docs/tutorials/customize_datasets.md), [designing data pipeline](docs/tutorials/data_pipeline.md), [customizing modules](docs/tutorials/customize_models.md), and [customizing runtime](docs/tutorials/customize_runtime.md).
-We also provide many [training tricks](docs/tutorials/training_tricks.md) for better training and [usefule tools](docs/useful_tools.md) for deployment.
+We also provide many [training tricks](docs/tutorials/training_tricks.md) for better training and [useful tools](docs/useful_tools.md) for deployment.

A Colab tutorial is also provided. You may preview the notebook [here](demo/MMSegmentation_Tutorial.ipynb) or directly [run](https://colab.research.google.com/github/open-mmlab/mmsegmentation/blob/master/demo/MMSegmentation_Tutorial.ipynb) on Colab.

8 changes: 4 additions & 4 deletions docs/tutorials/config.md
@@ -67,7 +67,7 @@ model = dict(
channels=512, # The intermediate channels of decode head.
pool_scales=(1, 2, 3, 6), # The avg pooling scales of PSPHead. Please refer to paper for details.
dropout_ratio=0.1, # The dropout ratio before final classification layer.
-num_classes=19, # Number of segmentation classs. Usually 19 for cityscapes, 21 for VOC, 150 for ADE20k.
+num_classes=19, # Number of segmentation class. Usually 19 for cityscapes, 21 for VOC, 150 for ADE20k.
norm_cfg=dict(type='SyncBN', requires_grad=True), # The configuration of norm layer.
align_corners=False, # The align_corners argument for resize in decoding.
loss_decode=dict( # Config of loss function for the decode_head.
@@ -82,7 +82,7 @@ model = dict(
num_convs=1, # Number of convs in FCNHead. It is usually 1 in auxiliary head.
concat_input=False, # Whether concat output of convs with input before classification layer.
dropout_ratio=0.1, # The dropout ratio before final classification layer.
-num_classes=19, # Number of segmentation classs. Usually 19 for cityscapes, 21 for VOC, 150 for ADE20k.
+num_classes=19, # Number of segmentation class. Usually 19 for cityscapes, 21 for VOC, 150 for ADE20k.
norm_cfg=dict(type='SyncBN', requires_grad=True), # The configuration of norm layer.
align_corners=False, # The align_corners argument for resize in decoding.
loss_decode=dict( # Config of loss function for the decode_head.
@@ -132,7 +132,7 @@ test_pipeline = [
flip=False, # Whether to flip images during testing
transforms=[
dict(type='Resize', # Use resize augmentation
-keep_ratio=True), # Whether to keep the ratio between height and width, the img_scale set here will be supressed by the img_scale set above.
+keep_ratio=True), # Whether to keep the ratio between height and width, the img_scale set here will be suppressed by the img_scale set above.
dict(type='RandomFlip'), # Thought RandomFlip is added in pipeline, it is not used when flip=False
dict(
type='Normalize', # Normalization config, the values are from img_norm_cfg
@@ -245,7 +245,7 @@ runner = dict(
checkpoint_config = dict( # Config to set the checkpoint hook, Refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/checkpoint.py for implementation.
by_epoch=False, # Whether count by epoch or not.
interval=4000) # The save interval.
-evaluation = dict( # The config to build the evaluation hook. Please refer to mmseg/core/evaulation/eval_hook.py for details.
+evaluation = dict( # The config to build the evaluation hook. Please refer to mmseg/core/evaluation/eval_hook.py for details.
interval=4000, # The interval of evaluation.
metric='mIoU') # The evaluation metric.

4 changes: 2 additions & 2 deletions docs/tutorials/customize_runtime.md
@@ -113,7 +113,7 @@ Tricks not implemented by the optimizer should be implemented through optimizer
_delete_=True, grad_clip=dict(max_norm=35, norm_type=2))
```

-If your config inherits the base config which already sets the `optimizer_config`, you might need `_delete_=True` to overide the unnecessary settings. See the [config documenetation](https://mmsegmentation.readthedocs.io/en/latest/config.html) for more details.
+If your config inherits the base config which already sets the `optimizer_config`, you might need `_delete_=True` to override the unnecessary settings. See the [config documentation](https://mmsegmentation.readthedocs.io/en/latest/config.html) for more details.

- __Use momentum schedule to accelerate model convergence__:
We support momentum scheduler to modify model's momentum according to learning rate, which could make the model converge in a faster way.
@@ -198,7 +198,7 @@ custom_hooks = [

### Modify default runtime hooks

-There are some common hooks that are not registerd through `custom_hooks`, they are
+There are some common hooks that are not registered through `custom_hooks`, they are

- log_config
- checkpoint_config
2 changes: 1 addition & 1 deletion docs_zh-CN/tutorials/config.md
@@ -241,7 +241,7 @@ runner = dict(
checkpoint_config = dict( # Config to set the checkpoint hook. Please refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/checkpoint.py for implementation.
by_epoch=False, # Whether to count by epoch or not.
interval=4000) # The save interval.
-evaluation = dict( # Config to build the evaluation hook. For details, please refer to mmseg/core/evaulation/eval_hook.py.
+evaluation = dict( # Config to build the evaluation hook. For details, please refer to mmseg/core/evaluation/eval_hook.py.
interval=4000, # The evaluation interval.
metric='mIoU') # The evaluation metric.

14 changes: 7 additions & 7 deletions mmseg/core/evaluation/metrics.py
@@ -7,7 +7,7 @@


def f_score(precision, recall, beta=1):
-"""calcuate the f-score value.
+"""calculate the f-score value.
Args:
precision (float | torch.Tensor): The precision value.
@@ -40,7 +40,7 @@ def intersect_and_union(pred_label,
ignore_index (int): Index that will be ignored in evaluation.
label_map (dict): Mapping old labels to new labels. The parameter will
work only when label is str. Default: dict().
-reduce_zero_label (bool): Wether ignore zero label. The parameter will
+reduce_zero_label (bool): Whether ignore zero label. The parameter will
work only when label is str. Default: False.
Returns:
@@ -102,7 +102,7 @@ def total_intersect_and_union(results,
num_classes (int): Number of categories.
ignore_index (int): Index that will be ignored in evaluation.
label_map (dict): Mapping old labels to new labels. Default: dict().
-reduce_zero_label (bool): Wether ignore zero label. Default: False.
+reduce_zero_label (bool): Whether ignore zero label. Default: False.
Returns:
ndarray: The intersection of prediction and ground truth histogram
@@ -148,7 +148,7 @@ def mean_iou(results,
nan_to_num (int, optional): If specified, NaN values will be replaced
by the numbers defined by the user. Default: None.
label_map (dict): Mapping old labels to new labels. Default: dict().
-reduce_zero_label (bool): Wether ignore zero label. Default: False.
+reduce_zero_label (bool): Whether ignore zero label. Default: False.
Returns:
dict[str, float | ndarray]:
@@ -187,7 +187,7 @@ def mean_dice(results,
nan_to_num (int, optional): If specified, NaN values will be replaced
by the numbers defined by the user. Default: None.
label_map (dict): Mapping old labels to new labels. Default: dict().
-reduce_zero_label (bool): Wether ignore zero label. Default: False.
+reduce_zero_label (bool): Whether ignore zero label. Default: False.
Returns:
dict[str, float | ndarray]: Default metrics.
@@ -228,7 +228,7 @@ def mean_fscore(results,
nan_to_num (int, optional): If specified, NaN values will be replaced
by the numbers defined by the user. Default: None.
label_map (dict): Mapping old labels to new labels. Default: dict().
-reduce_zero_label (bool): Wether ignore zero label. Default: False.
+reduce_zero_label (bool): Whether ignore zero label. Default: False.
beta (int): Determines the weight of recall in the combined score.
Default: False.
@@ -274,7 +274,7 @@ def eval_metrics(results,
nan_to_num (int, optional): If specified, NaN values will be replaced
by the numbers defined by the user. Default: None.
label_map (dict): Mapping old labels to new labels. Default: dict().
-reduce_zero_label (bool): Wether ignore zero label. Default: False.
+reduce_zero_label (bool): Whether ignore zero label. Default: False.
Returns:
float: Overall accuracy on all images.
ndarray: Per category accuracy, shape (num_classes, ).
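All of the hunks above touch docstrings in mmseg/core/evaluation/metrics.py; the first one documents `f_score`. For reference, the F-beta score that function refers to is the weighted harmonic mean of precision and recall. A minimal sketch of the formula (an illustration, not necessarily the repository's exact implementation):

```python
import torch


def f_score(precision, recall, beta=1):
    """F-beta score: weighted harmonic mean of precision and recall."""
    score = (1 + beta**2) * (precision * recall) / (
        (beta**2 * precision) + recall)
    return score


# Example: per-class precision/recall tensors give per-class F1 scores.
precision = torch.tensor([0.9, 0.8])
recall = torch.tensor([0.75, 0.85])
print(f_score(precision, recall))  # tensor([0.8182, 0.8242])
```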
4 changes: 2 additions & 2 deletions mmseg/datasets/pipelines/__init__.py
@@ -1,7 +1,7 @@
# Copyright (c) OpenMMLab. All rights reserved.
from .compose import Compose
-from .formating import (Collect, ImageToTensor, ToDataContainer, ToTensor,
-                        Transpose, to_tensor)
+from .formatting import (Collect, ImageToTensor, ToDataContainer, ToTensor,
+                         Transpose, to_tensor)
from .loading import LoadAnnotations, LoadImageFromFile
from .test_time_aug import MultiScaleFlipAug
from .transforms import (CLAHE, AdjustGamma, Normalize, Pad,
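The commit message says the old `formating` module is deprecated rather than deleted outright. A common pattern for that is a thin compatibility shim that re-exports from the renamed module and warns on import; the sketch below assumes such a shim (the file name and warning message are illustrative, not taken from this commit):

```python
# mmseg/datasets/pipelines/formating.py -- hypothetical backward-compatibility shim
import warnings

# Re-export everything from the new module under the old name.
from .formatting import *  # noqa: F401,F403

warnings.warn(
    'mmseg.datasets.pipelines.formating is deprecated, '
    'please import from mmseg.datasets.pipelines.formatting instead',
    DeprecationWarning)
```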
(Diffs for the remaining changed files are not shown here.)
