Merge branch 'master' into auto-resume
pppppM authored Mar 8, 2022
2 parents 0e32971 + 366fd0f commit 9b0be7a
Showing 62 changed files with 1,432 additions and 200 deletions.
149 changes: 100 additions & 49 deletions .github/workflows/build.yml
@@ -3,72 +3,76 @@

name: build

on: [push, pull_request]
on:
push:
paths-ignore:
- "README.md"
- "README_zh-CN.md"
- "model-index.yml"
- "configs/**"
- "docs/**"
- ".dev_scripts/**"

jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.7
uses: actions/setup-python@v1
with:
python-version: 3.7
- name: Install pre-commit hook
run: |
pip install pre-commit
pre-commit install
- name: Linting
run: pre-commit run --all-files
pull_request:
paths-ignore:
- "README.md"
- "README_zh-CN.md"
- "model-index.yml"
- "configs/**"
- "docs/**"
- ".dev_scripts/**"

build:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true

jobs:
test_linux:
runs-on: ubuntu-latest
env:
UBUNTU_VERSION: ubuntu1804
strategy:
matrix:
python-version: [3.7]
torch: [1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0]
torch:
[1.5.1+cpu, 1.6.0+cpu, 1.7.1+cpu, 1.8.0+cpu, 1.9.0+cpu, 1.10.1+cpu]
include:
- torch: 1.5.0
torchvision: 0.6.0
- torch: 1.6.0
torchvision: 0.7.0
- torch: 1.7.0
torchvision: 0.8.1
- torch: 1.8.0
torchvision: 0.9.0
- torch: 1.8.0
torchvision: 0.9.0
- torch: 1.5.1+cpu
torchvision: 0.6.1+cpu
mmcv_link: cpu/torch1.5
python-version: 3.7
- torch: 1.6.0+cpu
torchvision: 0.7.0+cpu
mmcv_link: cpu/torch1.6
python-version: 3.7
- torch: 1.7.1+cpu
torchvision: 0.8.2+cpu
mmcv_link: cpu/torch1.7
python-version: 3.7
- torch: 1.8.0+cpu
torchvision: 0.9.0+cpu
mmcv_link: cpu/torch1.8
python-version: 3.8
- torch: 1.8.0
torchvision: 0.9.0
python-version: 3.9
- torch: 1.9.0
torchvision: 0.10.0
- torch: 1.9.0
torchvision: 0.10.0
- torch: 1.9.0+cpu
torchvision: 0.10.0+cpu
mmcv_link: cpu/torch1.9
python-version: 3.8
- torch: 1.9.0
torchvision: 0.10.0
- torch: 1.10.1+cpu
torchvision: 0.11.2+cpu
mmcv_link: cpu/torch1.10
python-version: 3.9

steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install Pillow
run: pip install Pillow==6.2.2
if: ${{matrix.torchvision < 0.5}}
- name: Upgrade pip
run: pip install pip --upgrade
- name: Install PyTorch
run: pip install --use-deprecated=legacy-resolver torch==${{matrix.torch}}+cpu torchvision==${{matrix.torchvision}}+cpu -f https://download.pytorch.org/whl/torch_stable.html
run: pip install torch==${{matrix.torch}} torchvision==${{matrix.torchvision}} -f https://download.pytorch.org/whl/torch_stable.html
- name: Install MMCV
run: |
pip install --use-deprecated=legacy-resolver mmcv-full -f https://download.openmmlab.com/mmcv/dist/cpu/torch${{matrix.torch}}/index.html
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/${{matrix.mmcv_link}}/index.html
python -c 'import mmcv; print(mmcv.__version__)'
- name: Install mmcls dependencies
- name: Install unittest dependencies
run: |
pip install -r requirements.txt
- name: Install timm
@@ -85,8 +89,55 @@ jobs:
coverage report -m --omit="mmrazor/apis/*"
# Only upload coverage report for python3.7 && pytorch1.5
- name: Upload coverage to Codecov
if: ${{matrix.torch == '1.5.0' && matrix.python-version == '3.7'}}
uses: codecov/codecov-action@v1.0.10
if: ${{matrix.torch == '1.5.1+cpu' && matrix.python-version == '3.7'}}
uses: codecov/codecov-action@v2
with:
file: ./coverage.xml
flags: unittests
env_vars: OS,PYTHON
name: codecov-umbrella
fail_ci_if_error: false

test_windows:
runs-on: windows-latest
strategy:
matrix:
torch: [1.8.2+cpu]
torchvision: [0.9.2+cpu]
mmcv_link: [cpu/torch1.8]
python-version: [3.8]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Upgrade pip
run: pip install pip --upgrade --user
- name: Install PyTorch
run: pip install torch==${{matrix.torch}} torchvision==${{matrix.torchvision}} -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
- name: Install OpenCV
run: |
pip install opencv-python>=3
- name: Install MMCV
run: |
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cpu/torch1.8/index.html --only-binary mmcv-full
- name: Install unittest dependencies
run: |
pip install -r requirements.txt
- name: Install timm
run: |
pip install timm
- name: Build and install
run: |
pip install -e . -U
- name: Run unittests and generate coverage report
run: |
coverage run --branch --source mmrazor -m pytest tests/
coverage xml
coverage report -m --omit="mmrazor/apis/*"
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v2
with:
file: ./coverage.xml
flags: unittests
27 changes: 27 additions & 0 deletions .github/workflows/lint.yml
@@ -0,0 +1,27 @@
name: lint

on: [push, pull_request]

concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true

jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.7
uses: actions/setup-python@v2
with:
python-version: 3.7
- name: Install pre-commit hook
run: |
pip install pre-commit
pre-commit install
- name: Linting
run: pre-commit run --all-files
- name: Check docstring coverage
run: |
pip install interrogate
interrogate -v --ignore-init-method --ignore-module --ignore-nested-functions --ignore-regex "__repr__" --fail-under 80 mmrazor
26 changes: 10 additions & 16 deletions .pre-commit-config.yaml
@@ -4,12 +4,8 @@ repos:
rev: 3.8.3
hooks:
- id: flake8
- repo: https://github.com/asottile/seed-isort-config
rev: v2.2.0
hooks:
- id: seed-isort-config
- repo: https://github.com/timothycrosley/isort
rev: 4.3.21
- repo: https://github.com/PyCQA/isort
rev: 5.10.1
hooks:
- id: isort
- repo: https://github.com/pre-commit/mirrors-yapf
@@ -29,8 +25,8 @@ repos:
args: ["--remove"]
- id: mixed-line-ending
args: ["--fix=lf"]
- repo: https://github.com/jumanjihouse/pre-commit-hooks
rev: 2.1.4
- repo: https://github.com/markdownlint/markdownlint
rev: v0.11.0
hooks:
- id: markdownlint
args: ["-r", "~MD002,~MD013,~MD029,~MD033,~MD034",
@@ -44,11 +40,9 @@
hooks:
- id: docformatter
args: ["--in-place", "--wrap-descriptions", "79"]
# - repo: local
# hooks:
# - id: clang-format
# name: clang-format
# description: Format files with ClangFormat
# entry: clang-format -style=google -i
# language: system
# files: \.(c|cc|cxx|cpp|cu|h|hpp|hxx|cuh|proto)$
- repo: https://github.com/open-mmlab/pre-commit-hooks
rev: v0.2.0
hooks:
- id: check-algo-readme
- id: check-copyright
args: [ "mmrazor", "tests", "tools"]
25 changes: 13 additions & 12 deletions README.md
@@ -117,20 +117,21 @@ We wish that the toolbox and benchmark could serve the growing research communit
## Projects in OpenMMLab

- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
- [MIM](https://github.com/open-mmlab/mim): MIM Installs OpenMMLab Packages.
- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab next-generation platform for general 3D object detection.
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab next-generation action understanding toolbox and benchmark.
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab toolbox for text detection, recognition and understanding.
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMlab toolkit for generative models.
- [MMFlow](https://github.com/open-mmlab/mmflow) OpenMMLab optical flow toolbox and benchmark.
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab FewShot Learning Toolbox and Benchmark.
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D Human Parametric Model Toolbox and Benchmark.
- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning Toolbox and Benchmark.
- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab Model Compression Toolbox and Benchmark.
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab Model Deployment Framework.
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.
16 changes: 8 additions & 8 deletions README_zh-CN.md
@@ -146,20 +146,20 @@ MMRazor 是一款由来自不同高校和企业的研发人员共同参与贡献
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab 图像分类工具箱
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab 目标检测工具箱
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab 新一代通用 3D 目标检测平台
- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab 旋转框检测工具箱与测试基准
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab 语义分割工具箱
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab 新一代视频理解工具箱
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab 一体化视频目标感知平台
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab 全流程文字检测识别理解工具箱
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab 姿态估计工具箱
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab 图像视频编辑工具箱
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab 全流程文字检测识别理解工具包
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab 图片视频生成模型工具箱
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab 光流估计工具箱与测试基准
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab 少样本学习工具箱与测试基准
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 人体参数化模型工具箱与测试基准
- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab 自监督学习工具箱与测试基准
- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab 模型压缩工具箱与测试基准
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab 少样本学习工具箱与测试基准
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab 新一代视频理解工具箱
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab 一体化视频目标感知平台
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab 光流估计工具箱与测试基准
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab 图像视频编辑工具箱
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab 图片视频生成模型工具箱
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab 模型部署框架

## 欢迎加入 OpenMMLab 社区

扫描下方的二维码可关注 OpenMMLab 团队的 [知乎官方账号](https://www.zhihu.com/people/openmmlab),加入 OpenMMLab 团队的 [官方交流 QQ 群](https://jq.qq.com/?_wv=1027&k=aCvMxdr3)
28 changes: 17 additions & 11 deletions configs/distill/cwd/README.md
@@ -1,22 +1,15 @@
# CHANNEL-WISE KNOWLEDGE DISTILLATION FOR DENSE PREDICTION
# CWD
> [Channel-wise Knowledge Distillation for Dense Prediction](https://arxiv.org/abs/2011.13256)
<!-- [ALGORITHM] -->
## Abstract

Knowledge distillation (KD) has been proven to be a simple and effective tool for training compact models. Almost all KD variants for dense prediction tasks align the student and teacher networks' feature maps in the spatial domain, typically by minimizing point-wise and/or pair-wise discrepancy. Observing that in semantic segmentation, some layers' feature activations of each channel tend to encode saliency of scene categories (analogue to class activation mapping), we propose to align features channel-wise between the student and teacher networks. To this end, we first transform the feature map of each channel into a probability map using softmax normalization, and then minimize the Kullback-Leibler (KL) divergence of the corresponding channels of the two networks. By doing so, our method focuses on mimicking the soft distributions of channels between networks. In particular, the KL divergence enables learning to pay more attention to the most salient regions of the channel-wise maps, presumably corresponding to the most useful signals for semantic segmentation. Experiments demonstrate that our channel-wise distillation outperforms almost all existing spatial distillation methods for semantic segmentation considerably, and requires less computational cost during training. We consistently achieve superior performance on three benchmarks with various network structures.


![pipeline](/docs/en/imgs/model_zoo/cwd/pipeline.png)
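To make the method in the abstract concrete, below is a minimal, hypothetical sketch of the channel-wise distillation loss it describes. This sketch is not part of this commit's diff; the function name, the `tau` temperature, and the assumption of PyTorch feature maps of shape `(N, C, H, W)` are illustrative, and the toolbox's own implementation may differ in normalization and loss weighting.

```python
import torch.nn.functional as F


def channel_wise_kd_loss(feat_s, feat_t, tau=1.0):
    """Sketch of channel-wise KD: softmax-normalize each channel's spatial
    map, then take the KL divergence between teacher and student channels."""
    n, c, h, w = feat_s.shape
    # Flatten spatial dims so the softmax runs over H*W within each channel.
    log_p_student = F.log_softmax(feat_s.view(n, c, -1) / tau, dim=-1)
    p_teacher = F.softmax(feat_t.view(n, c, -1) / tau, dim=-1)
    # KL(teacher || student) per channel, averaged over batch and channels.
    loss = F.kl_div(log_p_student, p_teacher, reduction='none').sum(-1).mean()
    return loss * tau ** 2
```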

## Citation

```latex
@inproceedings{shu2021channel,
title={Channel-Wise Knowledge Distillation for Dense Prediction},
author={Shu, Changyong and Liu, Yifan and Gao, Jianfei and Yan, Zheng and Shen, Chunhua},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={5311--5320},
year={2021}
}
```

## Results and models
### Segmentation
@@ -28,3 +21,16 @@ Knowledge distillation (KD) has been proven to be a simple and effective tool fo
|Location|Dataset|Teacher|Student|mAP|mAP(T)|mAP(S)|Config | Download |
:--------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:------:|:---------|
| cls head |COCO|[gfl_r101_2x](https://github.com/open-mmlab/mmdetection/tree/master/configs/gfl/gfl_r101_fpn_mstrain_2x_coco.py)|[gfl_r50_1x](https://github.com/open-mmlab/mmdetection/tree/master/configs/gfl/gfl_r50_fpn_1x_coco.py)| 41.9 | 44.7 | 40.2 |[config]()|[teacher](https://download.openmmlab.com/mmdetection/v2.0/gfl/gfl_r101_fpn_mstrain_2x_coco/gfl_r101_fpn_mstrain_2x_coco_20200629_200126-dd12f847.pth) &#124;[model](https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmrazor/v0.1/distill/cwd/cwd_cls_head_gfl_r101_fpn_gfl_r50_fpn_1x_coco/cwd_cls_head_gfl_r101_fpn_gfl_r50_fpn_1x_coco_20211222-655dff39.pth?versionId=CAEQHxiBgMD7.uuI7xciIDY1MDRjYzlkN2ExOTRiY2NhNmU4NGJlMmExNjA2YzMy) &#124; [log](https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmrazor/v0.1/distill/cwd/cwd_cls_head_gfl_r101_fpn_gfl_r50_fpn_1x_coco/cwd_cls_head_gfl_r101_fpn_gfl_r50_fpn_1x_coco_20211212_205444.log.json?versionId=CAEQHxiBgID.o_WI7xciIDgyZjRjYTU4Y2ZjNjRjOGU5MTBlMTQ3ZjEyMTE4OTJl)|


## Citation

```latex
@inproceedings{shu2021channel,
title={Channel-Wise Knowledge Distillation for Dense Prediction},
author={Shu, Changyong and Liu, Yifan and Gao, Jianfei and Yan, Zheng and Shen, Chunhua},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={5311--5320},
year={2021}
}
```
