Commit

[Improvement] Lint markdown files (open-mmlab#225)
* linting

* polish

* polish

* polish

* polish

* polish

* polish

* update changelog
dreamerlin authored Dec 18, 2020
1 parent f4b165b commit 4df447b
Showing 48 changed files with 465 additions and 195 deletions.
7 changes: 6 additions & 1 deletion .github/CONTRIBUTING.md
@@ -13,16 +13,20 @@ All kinds of contributions are welcome, including but not limited to the following
4. create a PR

Note

- If you plan to add some new features that involve large changes, it is encouraged to open an issue for discussion first.
- If you are the author of some papers and would like to include your method in MMAction2,

please contact Kai Chen (chenkaidev@gmail.com). We would much appreciate your contribution.

## Code style

### Python

We adopt [PEP8](https://www.python.org/dev/peps/pep-0008/) as the preferred code style.

We use the following tools for linting and formatting:

- [flake8](http://flake8.pycqa.org/en/latest/): linter
- [yapf](https://github.com/google/yapf): formatter
- [isort](https://github.com/timothycrosley/isort): sort imports
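
As a quick reference, these tools can also be invoked by hand; a minimal sketch (the paths are illustrative assumptions, and the pre-commit workflow below remains the supported one):

```shell
# Illustrative manual invocations of the tools listed above.
# The target paths are assumptions; adjust them to the packages you are checking.
flake8 mmaction/ tools/                       # lint
isort --check-only --diff mmaction/ tools/    # verify import ordering
yapf --recursive --diff mmaction/ tools/      # show formatting diffs without rewriting files
```
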
@@ -40,14 +44,15 @@ pip install -U pre-commit
```

From the repository folder

```
pre-commit install
```

After this, the code linters and formatter will be enforced on every commit.

> Before you create a PR, make sure that your code lints and is formatted by yapf.
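
If you want to check everything before opening a PR, pre-commit can also be run manually; a small sketch:

```shell
# Run all configured hooks against every file in the repository,
# not just the files staged for the next commit
pre-commit run --all-files
```
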
### C++ and CUDA

We follow the [Google C++ Style Guide](https://google.github.io/styleguide/cppguide.html).
7 changes: 7 additions & 0 deletions .github/ISSUE_TEMPLATE/error-report.md
@@ -11,17 +11,21 @@ Thanks for your error report and we appreciate it a lot.
If you feel we have helped you, give us a STAR! :satisfied:

**Checklist**

1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.

**Describe the bug**
A clear and concise description of what the bug is.

**Reproduction**

1. What command or script did you run?

```
A placeholder for the command.
```

2. Did you make any modifications to the code or config? Do you understand what you have modified?
3. What dataset did you use?

@@ -33,10 +37,13 @@ A placeholder for the command.
- Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)

**Error traceback**

If applicable, paste the error traceback here.

```
A placeholder for traceback.
```

**Bug fix**

If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
3 changes: 3 additions & 0 deletions .github/ISSUE_TEMPLATE/feature_request.md
@@ -13,13 +13,16 @@ If you feel we have helped you, give us a STAR! :satisfied:
**Describe the feature**

**Motivation**

A clear and concise description of the motivation of the feature.
Ex1. It is inconvenient when [....].
Ex2. There is a recent paper [....], which is very helpful for [....].

**Related resources**

If there is official code released or there are third-party implementations, please also provide the information here; that would be very helpful.

**Additional context**

Add any other context or screenshots about the feature request here.
If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated.
13 changes: 11 additions & 2 deletions .github/ISSUE_TEMPLATE/reimplementation_questions.md
@@ -12,17 +12,20 @@ If you feel we have helped you, give us a STAR! :satisfied:
**Notice**

There are several common situations in reimplementation issues, as listed below:

1. Reimplement a model in the model zoo using the provided configs
2. Reimplement a model in the model zoo on another dataset (e.g., custom datasets)
3. Reimplement a custom model but all the components are implemented in MMAction2
4. Reimplement a custom model with new modules implemented by yourself

There are several things to do for the different cases, as below.

- For cases 1 & 3, please follow the steps in the following sections so that we can quickly identify the issue.
- For cases 2 & 4, please understand that we are not able to help much here, because we usually do not know the full code, and users should be responsible for the code they write.
- One suggestion for cases 2 & 4 is to first check whether the bug lies in the self-implemented code or in the original code. For example, users can first make sure that the same model runs well on supported datasets. If you still need help, please describe what you have done and what you obtained in the issue, follow the steps in the following sections, and be as clear as possible so that we can better help you.

**Checklist**

1. I have searched related issues but cannot get the expected help.
2. The issue has not been fixed in the latest version.

@@ -31,27 +34,33 @@ There are several things to do for the different cases, as below.
A clear and concise description of the problem you met and what you have done.

**Reproduction**

1. What command or script did you run?

```
A placeholder for the command.
```

2. What config did you run?

```
A placeholder for the config.
```

3. Did you make any modifications to the code or config? Do you understand what you have modified?
4. What dataset did you use?

**Environment**

1. Please run `PYTHONPATH=${PWD}:$PYTHONPATH python mmaction/utils/collect_env.py` to collect necessary environment information and paste it here.
2. You may add additional information that may be helpful for locating the problem, such as
- How you installed PyTorch [e.g., pip, conda, source]
- Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
1. How you installed PyTorch [e.g., pip, conda, source]
2. Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)

**Results**

If applicable, paste the related results here, e.g., what you expect and what you get.

```
A placeholder for results comparison
```
6 changes: 6 additions & 0 deletions .pre-commit-config.yaml
@@ -1,3 +1,4 @@
exclude: ^tests/data/
repos:
- repo: https://gitlab.com/pycqa/flake8
rev: 3.8.3
@@ -28,6 +29,11 @@ repos:
args: ["--remove"]
- id: mixed-line-ending
args: ["--fix=lf"]
- repo: https://github.com/jumanjihouse/pre-commit-hooks
rev: 2.1.4
hooks:
- id: markdownlint
args: [ "-r", "~MD002,~MD013,~MD024,~MD029,~MD033,~MD034,~MD036" ]
- repo: https://github.com/myint/docformatter
rev: v1.3.1
hooks:
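
For context, the markdownlint hook added above wraps the Ruby `mdl` tool; a rough local equivalent (assuming `mdl` is installed, e.g. via `gem install mdl`) might look like:

```shell
# Install the Ruby-based markdown linter the hook uses
# (assumption: a working Ruby/gem setup is available)
gem install mdl

# Lint the docs with the same rules disabled as in the hook's args;
# the target files/dirs here are illustrative
mdl --rules "~MD002,~MD013,~MD024,~MD029,~MD033,~MD034,~MD036" README.md docs/
```
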
5 changes: 4 additions & 1 deletion README.md
@@ -12,7 +12,6 @@
[![Average time to resolve an issue](https://isitmaintained.com/badge/resolution/open-mmlab/mmaction2.svg)](https://github.com/open-mmlab/mmaction2/issues)
[![Percentage of issues still open](https://isitmaintained.com/badge/open/open-mmlab/mmaction2.svg)](https://github.com/open-mmlab/mmaction2/issues)


MMAction2 is an open-source toolbox for action understanding based on PyTorch.
It is a part of the [OpenMMLab](http://openmmlab.org/) project.

@@ -54,6 +53,7 @@ This project is released under the [Apache 2.0 license](LICENSE).
v0.9.0 was released on 30/11/2020. Please refer to [changelog.md](docs/changelog.md) for details and release history.

## Benchmark

| Model |input| io backend | batch size x gpus | MMAction2 (s/iter) | MMAction (s/iter) | Temporal-Shift-Module (s/iter) | PySlowFast (s/iter) |
| :--- | :---------------:|:---------------:| :---------------:| :---------------: | :--------------------: | :----------------------------: | :-----------------: |
| [TSN](/configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py)| 256p rawframes |Memcached| 32x8|**[0.32](https://download.openmmlab.com/mmaction/benchmark/recognition/mmaction2/tsn_256p_rawframes_memcahed_32x8.zip)** | [0.38](https://download.openmmlab.com/mmaction/benchmark/recognition/mmaction/tsn_256p_rawframes_memcached_32x8.zip)| [0.42](https://download.openmmlab.com/mmaction/benchmark/recognition/temporal_shift_module/tsn_256p_rawframes_memcached_32x8.zip)| x |
@@ -68,7 +68,9 @@ v0.9.0 was released on 30/11/2020. Please refer to [changelog.md](docs/changelog.md)
Details can be found in [benchmark](docs/benchmark.md).

## ModelZoo

Supported methods for action recognition:

- [x] [TSN](configs/recognition/tsn/README.md)
- [x] [TSM](configs/recognition/tsm/README.md)
- [x] [TSM Non-Local](configs/recognition/i3d)
@@ -86,6 +88,7 @@ Supported methods for action recognition:
- [x] [MultiModality: Audio](configs/recognition_audio/resnet/README.md)

Supported methods for action localization:

- [x] [BMN](configs/localization/bmn/README.md)
- [x] [BSN](configs/localization/bsn/README.md)
- [x] [SSN](configs/localization/ssn/README.md)
11 changes: 9 additions & 2 deletions configs/detection/ava/README.md
@@ -43,30 +43,37 @@
- Notes:

1. The **gpus** indicates the number of GPUs we used to get the checkpoint.
According to the [Linear Scaling Rule](https://arxiv.org/abs/1706.02677), you may set the learning rate proportional to the batch size if you use different GPUs or videos per GPU,
e.g., lr=0.01 for 4 GPUs * 2 video/gpu and lr=0.08 for 16 GPUs * 4 video/gpu.
According to the [Linear Scaling Rule](https://arxiv.org/abs/1706.02677), you may set the learning rate proportional to the batch size if you use different GPUs or videos per GPU,
e.g., lr=0.01 for 4 GPUs x 2 video/gpu and lr=0.08 for 16 GPUs x 4 video/gpu.
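
In other words, the learning rate scales linearly with the effective batch size (GPUs x videos per GPU). A sketch of our reading of the rule, with illustrative notation that is not from this repository:

```latex
% Linear Scaling Rule (notation is ours, for illustration only):
% G = number of GPUs, V = videos per GPU
lr_{\mathrm{new}} = lr_{\mathrm{base}} \times
  \frac{G_{\mathrm{new}} \times V_{\mathrm{new}}}{G_{\mathrm{base}} \times V_{\mathrm{base}}}
% e.g., 0.01 \times \frac{16 \times 4}{4 \times 2} = 0.08
```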

For more details on data preparation, you can refer to AVA in [Data Preparation](/docs/data_preparation.md).

## Train

You can use the following command to train a model.

```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```

Example: train SlowOnly model on AVA with periodic validation.

```shell
python tools/train.py configs/detection/AVA/slowonly_kinetics_pretrained_r50_8x8x1_20e_ava_rgb.py --validate
```

For more details and optional arguments, you can refer to the **Training setting** part in [getting_started](/docs/getting_started.md#training-setting).

## Test

You can use the following command to test a model.

```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
```

Example: test SlowOnly model on AVA and dump the result to a csv file.

```shell
python tools/test.py configs/detection/AVA/slowonly_kinetics_pretrained_r50_8x8x1_20e_ava_rgb.py checkpoints/SOME_CHECKPOINT.pth --eval bbox --out results.csv
```
16 changes: 14 additions & 2 deletions configs/localization/bmn/README.md
@@ -1,6 +1,7 @@
# BMN

## Introduction

```
@inproceedings{lin2019bmn,
title={Bmn: Boundary-matching network for temporal action proposal generation},
@@ -33,8 +34,8 @@
- Notes:

1. The **gpus** indicates the number of GPUs we used to get the checkpoint.
According to the [Linear Scaling Rule](https://arxiv.org/abs/1706.02677), you may set the learning rate proportional to the batch size if you use different GPUs or videos per GPU,
e.g., lr=0.01 for 4 GPUs * 2 video/gpu and lr=0.08 for 16 GPUs * 4 video/gpu.
According to the [Linear Scaling Rule](https://arxiv.org/abs/1706.02677), you may set the learning rate proportional to the batch size if you use different GPUs or videos per GPU,
e.g., lr=0.01 for 4 GPUs x 2 video/gpu and lr=0.08 for 16 GPUs x 4 video/gpu.
2. For the feature column, cuhk_mean_100 denotes the widely used CUHK ActivityNet feature extracted by [anet2016-cuhk](https://github.com/yjxiong/anet2016-cuhk); mmaction_video and mmaction_clip denote features extracted by mmaction, with a video-level or clip-level ActivityNet-finetuned model respectively.
3. We evaluate the action detection performance of BMN using the [anet_cuhk_2017](https://download.openmmlab.com/mmaction/localization/cuhk_anet17_pred.json) submission for the ActivityNet 2017 Untrimmed Video Classification Track to assign a label to each action proposal.

@@ -43,35 +44,46 @@ e.g., lr=0.01 for 4 GPUs * 2 video/gpu and lr=0.08 for 16 GPUs * 4 video/gpu.
For more details on data preparation, you can refer to ActivityNet feature in [Data Preparation](/docs/data_preparation.md).

## Train

You can use the following command to train a model.

```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```

Example: train BMN model on ActivityNet features dataset.

```shell
python tools/train.py configs/localization/bmn/bmn_400x100_2x8_9e_activitynet_feature.py
```

For more details and optional arguments, you can refer to the **Training setting** part in [getting_started](/docs/getting_started.md#training-setting).

## Test

You can use the following command to test a model.

```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
```

Example: test BMN on ActivityNet feature dataset.

```shell
# Note: if evaluation is enabled, please make sure the annotation file for the test data contains the ground truth.
python tools/test.py configs/localization/bmn/bmn_400x100_2x8_9e_activitynet_feature.py checkpoints/SOME_CHECKPOINT.pth --eval AR@AN --out results.json
```

You can also test the action detection performance of the model, with the [anet_cuhk_2017](https://download.openmmlab.com/mmaction/localization/cuhk_anet17_pred.json) prediction file and the generated proposal file (`results.json` from the last command).

```shell
python tools/analysis/report_map.py --proposal path/to/proposal_file
```

Notes:

1. (Optional) You can use the following command to generate a formatted proposal file, which will be fed into the action classifier (currently SSN and P-GCN are supported, not TSN, I3D, etc.) to get the classification results of the proposals.

```shell
python tools/data/activitynet/convert_proposal_format.py
```