[Docs] Reformat README.md of all algorithms (open-mmlab#663)
wangruohui authored Dec 18, 2021
1 parent f221edd commit 797d6e6
Showing 28 changed files with 442 additions and 358 deletions.
22 changes: 9 additions & 13 deletions .dev_scripts/github/update_model_index.py
@@ -144,22 +144,18 @@ def parse_md(md_file):
     with open(md_file, 'r') as md:
         lines = md.readlines()
         i = 0
+        name = lines[0][2:]
+        name = name.split('(', 1)[0].strip()
+        collection['Metadata']['Architecture'].append(name)
+        collection['Name'] = name
+        collection_name = name
         while i < len(lines):
             # parse reference
-            if lines[i][:2] == '<!':
-                j = i + 1
-                while len(lines[j]) < 8 or lines[j][:8] != '<summary':
-                    j += 1
-                url, name = re.findall(r'<a href="(.*)">(.*)</a>', lines[j])[0]
-                name = name.split('(', 1)[0].strip()
-                # get architecture
-                if 'ALGORITHM' in lines[i] or 'BACKBONE' in lines[i]:
-                    collection['Metadata']['Architecture'].append(name)
-                    collection['Name'] = name
-                    collection_name = name
-                # get paper url
+            if lines[i].startswith('<!-- [PAPER_URL:'):
+                url = re.match(r'<!-- \[PAPER_URL: (.*?)] -->', lines[i])
+                url = url.groups()[0]
                 collection['Paper'].append(url)
-                i = j + 1
+                i += 1
 
             # parse table
             elif lines[i][0] == '|' and i + 1 < len(lines) and \
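For reference, the updated parsing logic above can be exercised on its own. The sketch below mirrors it with an illustrative helper (`parse_paper_info` is not part of the repository): the architecture name is read from the level-1 title and each paper URL from the new `<!-- [PAPER_URL: ...] -->` comment.

```python
import re


def parse_paper_info(md_lines):
    """Collect the architecture name and paper URLs the way the updated
    parse_md() does: the name comes from the level-1 title, each paper URL
    from a '<!-- [PAPER_URL: ...] -->' comment."""
    name = md_lines[0][2:].split('(', 1)[0].strip()
    urls = []
    for line in md_lines:
        match = re.match(r'<!-- \[PAPER_URL: (.*?)] -->', line)
        if match:
            urls.append(match.group(1))
    return name, urls


# Example against the DeepFillv1 README reformatted below
lines = [
    "# DeepFillv1 (CVPR'2018)\n",
    '<!-- [PAPER_URL: https://arxiv.org/abs/1801.07892] -->\n',
]
print(parse_paper_info(lines))
# -> ('DeepFillv1', ['https://arxiv.org/abs/1801.07892'])
```

Taking the name from the title line is what allows the old `<summary>`-based lookup above to be removed, since the `<details>` blocks disappear from the READMEs in this commit.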
2 changes: 2 additions & 0 deletions .github/workflows/build.yml
@@ -11,6 +11,7 @@ on:
       - 'docs_zh-CN/**'
       - 'examples/**'
       - '.dev_scripts/**'
+      - '.pre-commit-config.yaml'
 
   pull_request:
     paths-ignore:
@@ -20,6 +21,7 @@ on:
       - 'docs_zh-CN/**'
       - 'examples/**'
       - '.dev_scripts/**'
+      - '.pre-commit-config.yaml'
 
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
4 changes: 4 additions & 0 deletions .pre-commit-config.yaml
@@ -49,3 +49,7 @@ repos:
         language: python
         files: ^configs/.*\.md$
         require_serial: true
+  - repo: https://github.com/open-mmlab/pre-commit-hooks
+    rev: v0.1.0  # Use the ref you want to point at
+    hooks:
+      - id: check-algo-readme
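The `check-algo-readme` hook comes from open-mmlab/pre-commit-hooks, and its internals are not part of this diff. Purely as an illustration of what such a check could verify against the reformatted READMEs, a hypothetical checker might require the template's marker comments to be present:

```python
import sys

# Marker comments used by the reformatted READMEs in this commit
REQUIRED_MARKERS = [
    '<!-- [ABSTRACT] -->',
    '<!-- [IMAGE] -->',
    '<!-- [PAPER_TITLE:',
    '<!-- [PAPER_URL:',
]


def check_readme(path):
    """Hypothetical check: every marker comment must appear in the README."""
    with open(path, 'r') as f:
        text = f.read()
    missing = [m for m in REQUIRED_MARKERS if m not in text]
    for marker in missing:
        print(f'{path}: missing {marker}')
    return not missing


if __name__ == '__main__':
    results = [check_readme(p) for p in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)
```

The real hook may enforce more than this (section order, citation block, results tables); the sketch only shows the general shape of a README check.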
36 changes: 17 additions & 19 deletions configs/inpainting/deepfillv1/README.md
@@ -1,9 +1,22 @@
# DeepFillv1 (CVPR'2018)

<!-- [ALGORITHM] -->
## Abstract

<details>
<summary align="right"><a href="https://arxiv.org/abs/1801.07892">DeepFillv1 (CVPR'2018)</a></summary>
<!-- [ABSTRACT] -->

Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feed-forward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones.

<!-- [IMAGE] -->
<p align="center">
<img src="https://user-images.githubusercontent.com/12726765/144174665-9675931f-e448-4475-a659-99b65e7d4a64.png" />
</p>

<!-- [PAPER_TITLE: Generative Image Inpainting with Contextual Attention] -->
<!-- [PAPER_URL: https://arxiv.org/abs/1801.07892] -->

## Citation

<!-- [ALGORITHM] -->

```bibtex
@inproceedings{yu2018generative,
@@ -15,22 +28,7 @@
}
```

</details>

<br/>

## Abstract

Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feed-forward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones.

<p align="center">
<img src="https://user-images.githubusercontent.com/12726765/144174665-9675931f-e448-4475-a659-99b65e7d4a64.png" />
</p>


## Results


## Results and models

**Places365-Challenge**

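The contextual attention described in the DeepFillv1 abstract above — explicitly borrowing features from distant known regions — boils down to cosine-similarity matching between foreground locations and background patches. A minimal single-scale PyTorch sketch of that idea (an editorial illustration, not the implementation shipped in this repository):

```python
import torch
import torch.nn.functional as F


def contextual_attention(foreground, background, ksize=3, scale=10.0):
    """Rebuild foreground features as attention-weighted sums of background
    patches, matched by cosine similarity (simplified single-scale sketch)."""
    b, c, _, _ = background.shape
    pad = ksize // 2
    # every background location becomes one convolution kernel
    patches = F.unfold(background, kernel_size=ksize, padding=pad)  # (B, C*k*k, L)
    num = patches.shape[-1]
    kernels = patches.transpose(1, 2).reshape(b * num, c, ksize, ksize)
    outs = []
    for i in range(b):
        k = kernels[i * num:(i + 1) * num]                           # (L, C, k, k)
        k_hat = k / k.flatten(1).norm(dim=1).clamp_min(1e-4).view(-1, 1, 1, 1)
        sim = F.conv2d(foreground[i:i + 1], k_hat, padding=pad)      # (1, L, H, W)
        attn = F.softmax(sim * scale, dim=1)     # attention over background patches
        out = F.conv_transpose2d(attn, k, padding=pad) / (ksize ** 2)
        outs.append(out)
    return torch.cat(outs, dim=0)


fg, bg = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(contextual_attention(fg, bg).shape)  # torch.Size([1, 64, 32, 32])
```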
34 changes: 17 additions & 17 deletions configs/inpainting/deepfillv2/README.md
@@ -1,8 +1,22 @@
# DeepFillv2 (CVPR'2019)

## Abstract

<!-- [ABSTRACT] -->

We present a generative image inpainting system to complete images with free-form mask and guidance. The system is based on gated convolutions learned from millions of images without additional labelling efforts. The proposed gated convolution solves the issue of vanilla convolution that treats all input pixels as valid ones, generalizes partial convolution by providing a learnable dynamic feature selection mechanism for each channel at each spatial location across all layers. Moreover, as free-form masks may appear anywhere in images with any shape, global and local GANs designed for a single rectangular mask are not applicable. Thus, we also present a patch-based GAN loss, named SN-PatchGAN, by applying spectral-normalized discriminator on dense image patches. SN-PatchGAN is simple in formulation, fast and stable in training. Results on automatic image inpainting and user-guided extension demonstrate that our system generates higher-quality and more flexible results than previous methods. Our system helps user quickly remove distracting objects, modify image layouts, clear watermarks and edit faces.

<!-- [IMAGE] -->
<p align="center">
<img src="https://user-images.githubusercontent.com/12726765/144175160-75473789-924f-490b-ab25-4c4f252fa55f.png" />
</p>

<!-- [PAPER_TITLE: Free-Form Image Inpainting with Gated Convolution] -->
<!-- [PAPER_URL: https://arxiv.org/abs/1806.03589] -->

## Citation

<!-- [ALGORITHM] -->
<details>
<summary align="right"><a href="https://arxiv.org/abs/1806.03589">DeepFillv2 (CVPR'2019)</a></summary>

```bibtex
@inproceedings{yu2019free,
@@ -14,21 +28,7 @@
}
```

</details>

<br/>


## Abstract

We present a generative image inpainting system to complete images with free-form mask and guidance. The system is based on gated convolutions learned from millions of images without additional labelling efforts. The proposed gated convolution solves the issue of vanilla convolution that treats all input pixels as valid ones, generalizes partial convolution by providing a learnable dynamic feature selection mechanism for each channel at each spatial location across all layers. Moreover, as free-form masks may appear anywhere in images with any shape, global and local GANs designed for a single rectangular mask are not applicable. Thus, we also present a patch-based GAN loss, named SN-PatchGAN, by applying spectral-normalized discriminator on dense image patches. SN-PatchGAN is simple in formulation, fast and stable in training. Results on automatic image inpainting and user-guided extension demonstrate that our system generates higher-quality and more flexible results than previous methods. Our system helps user quickly remove distracting objects, modify image layouts, clear watermarks and edit faces.

<p align="center">
<img src="https://user-images.githubusercontent.com/12726765/144175160-75473789-924f-490b-ab25-4c4f252fa55f.png" />
</p>


## Results
## Results and models

**Places365-Challenge**

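The gated convolution highlighted in the DeepFillv2 abstract above replaces hard validity masks with a soft gate learned per channel and per spatial location. A minimal PyTorch sketch of the layer (illustration only, not this repository's implementation):

```python
import torch
import torch.nn as nn


class GatedConv2d(nn.Module):
    """Gated convolution: output = ELU(features) * sigmoid(gate), so the layer
    learns per-location, per-channel how much of each feature to pass through."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.act = nn.ELU()

    def forward(self, x):
        return self.act(self.feature(x)) * torch.sigmoid(self.gate(x))


# a typical free-form inpainting input: RGB image concatenated with a mask channel
x = torch.randn(1, 4, 256, 256)
print(GatedConv2d(4, 32)(x).shape)  # torch.Size([1, 32, 256, 256])
```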
33 changes: 17 additions & 16 deletions configs/inpainting/global_local/README.md
@@ -1,8 +1,22 @@
# Global&Local (ToG'2017)

## Abstract

<!-- [ABSTRACT] -->

We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with the patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the image.

<!-- [IMAGE] -->
<p align="center">
<img src="https://user-images.githubusercontent.com/12726765/144175196-51dfda11-f7e1-4c7e-abed-42799f757bef.png" />
</p>

<!-- [PAPER_TITLE: Globally and Locally Consistent Image Completion] -->
<!-- [PAPER_URL: http://iizuka.cs.tsukuba.ac.jp/projects/completion/data/completion_sig2017.pdf] -->

## Citation

<!-- [ALGORITHM] -->
<details>
<summary align="right"><a href="http://iizuka.cs.tsukuba.ac.jp/projects/completion/data/completion_sig2017.pdf">Global&Local (ToG'2017)</a></summary>

```bibtex
@article{iizuka2017globally,
@@ -17,20 +31,7 @@
}
```

</details>

<br/>

## Abstract

We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with the patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the image.

<p align="center">
<img src="https://user-images.githubusercontent.com/12726765/144175196-51dfda11-f7e1-4c7e-abed-42799f757bef.png" />
</p>


## Results
## Results and models

*Note that we do not apply the post-processing module in Global&Local for a fair comparison with current deep inpainting methods.*

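The Global&Local abstract above hinges on two discriminators: one scores the whole completed image, the other scores a patch centered on the filled region, and their features are fused into a single real/fake prediction. A compact PyTorch sketch of that pairing (illustrative only; layer sizes are arbitrary and not taken from this repository):

```python
import torch
import torch.nn as nn


def branch(out_dim=512):
    """Small CNN mapping an RGB crop to a feature vector (sizes are arbitrary)."""
    return nn.Sequential(
        nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
        nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(inplace=True),
        nn.Conv2d(128, 256, 5, stride=2, padding=2), nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, out_dim),
    )


class GlobalLocalDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.global_branch = branch()   # sees the whole completed image
        self.local_branch = branch()    # sees a crop centered on the filled region
        self.classifier = nn.Linear(1024, 1)

    def forward(self, full_image, local_patch):
        feat = torch.cat(
            [self.global_branch(full_image), self.local_branch(local_patch)], dim=1)
        return self.classifier(feat)    # single real/fake logit


d = GlobalLocalDiscriminator()
score = d(torch.randn(2, 3, 256, 256), torch.randn(2, 3, 128, 128))
print(score.shape)  # torch.Size([2, 1])
```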
35 changes: 17 additions & 18 deletions configs/inpainting/partial_conv/README.md
@@ -1,9 +1,22 @@
# PConv (ECCV'2018)

<!-- [ALGORITHM] -->
## Abstract

<!-- [ABSTRACT] -->

Existing deep learning based image inpainting methods use a standard convolutional network over the corrupted image, using convolutional filter responses conditioned on both valid pixels as well as the substitute values in the masked holes (typically the mean value). This often leads to artifacts such as color discrepancy and blurriness. Post-processing is usually used to reduce such artifacts, but are expensive and may fail. We propose the use of partial convolutions, where the convolution is masked and renormalized to be conditioned on only valid pixels. We further include a mechanism to automatically generate an updated mask for the next layer as part of the forward pass. Our model outperforms other methods for irregular masks. We show qualitative and quantitative comparisons with other methods to validate our approach.

<!-- [IMAGE] -->
<p align="center">
<img src="https://user-images.githubusercontent.com/12726765/144175613-1bc9ad1b-072d-4c1f-a97d-1af5be2590bd.png" />
</p>

<details>
<summary align="right"><a href="https://arxiv.org/abs/1804.07723">PConv (ECCV'2018)</a></summary>
<!-- [PAPER_TITLE: Image Inpainting for Irregular Holes Using Partial Convolutions] -->
<!-- [PAPER_URL: https://arxiv.org/abs/1804.07723] -->

## Citation

<!-- [ALGORITHM] -->

```bibtex
@inproceedings{liu2018image,
@@ -15,21 +28,7 @@
}
```

</details>

<br/>

## Abstract

Existing deep learning based image inpainting methods use a standard convolutional network over the corrupted image, using convolutional filter responses conditioned on both valid pixels as well as the substitute values in the masked holes (typically the mean value). This often leads to artifacts such as color discrepancy and blurriness. Post-processing is usually used to reduce such artifacts, but are expensive and may fail. We propose the use of partial convolutions, where the convolution is masked and renormalized to be conditioned on only valid pixels. We further include a mechanism to automatically generate an updated mask for the next layer as part of the forward pass. Our model outperforms other methods for irregular masks. We show qualitative and quantitative comparisons with other methods to validate our approach.

<p align="center">
<img src="https://user-images.githubusercontent.com/12726765/144175613-1bc9ad1b-072d-4c1f-a97d-1af5be2590bd.png" />
</p>


## Results

## Results and models

**Places365-Challenge**

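The partial convolution described in the PConv abstract above masks the convolution, renormalizes by the number of valid pixels under each window, and passes an updated mask to the next layer. A simplified PyTorch sketch with a single shared mask (illustration only, not this repository's operator):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartialConv2d(nn.Module):
    """Convolve (x * mask) and renormalize by the number of valid pixels under
    each window; also return the updated (dilated) mask for the next layer."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=False)
        # fixed all-ones kernel, used only to count valid pixels per window
        self.register_buffer('ones', torch.ones(1, in_ch, kernel_size, kernel_size))
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        with torch.no_grad():
            valid = F.conv2d(mask, self.ones, stride=self.stride, padding=self.padding)
        out = self.conv(x * mask)
        out = out * (self.ones.numel() / valid.clamp(min=1))  # renormalization
        new_mask = (valid > 0).float()                         # mask update rule
        return out * new_mask, new_mask


x = torch.randn(1, 3, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.5).float().expand(-1, 3, -1, -1)
y, m = PartialConv2d(3, 16)(x, mask)
print(y.shape, m.shape)  # torch.Size([1, 16, 64, 64]) torch.Size([1, 1, 64, 64])
```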
34 changes: 17 additions & 17 deletions configs/mattors/dim/README.md
@@ -1,8 +1,22 @@
# DIM (CVPR'2017)

## Abstract

<!-- [ABSTRACT] -->

Image matting is a fundamental computer vision problem and has many applications. Previous algorithms have poor performance when an image has similar foreground and background colors or complicated textures. The main reasons are prior methods 1) only use low-level features and 2) lack high-level context. In this paper, we propose a novel deep learning based algorithm that can tackle both these problems. Our deep model has two parts. The first part is a deep convolutional encoder-decoder network that takes an image and the corresponding trimap as inputs and predict the alpha matte of the image. The second part is a small convolutional network that refines the alpha matte predictions of the first network to have more accurate alpha values and sharper edges. In addition, we also create a large-scale image matting dataset including 49300 training images and 1000 testing images. We evaluate our algorithm on the image matting benchmark, our testing set, and a wide variety of real images. Experimental results clearly demonstrate the superiority of our algorithm over previous methods.

<!-- [IMAGE] -->
<p align="center">
<img src="https://user-images.githubusercontent.com/12726765/144175771-05b4d8f5-1abc-48ee-a5f1-8cc89a156e27.png" />
</p>

<!-- [PAPER_TITLE: Deep Image Matting] -->
<!-- [PAPER_URL: https://arxiv.org/abs/1703.03872] -->

## Citation

<!-- [ALGORITHM] -->
<details>
<summary align="right"><a href="https://arxiv.org/abs/1703.03872">DIM (CVPR'2017)</a></summary>

```bibtex
@inproceedings{xu2017deep,
@@ -14,21 +28,7 @@
}
```

</details>

<br/>

## Abstract

Image matting is a fundamental computer vision problem and has many applications. Previous algorithms have poor performance when an image has similar foreground and background colors or complicated textures. The main reasons are prior methods 1) only use low-level features and 2) lack high-level context. In this paper, we propose a novel deep learning based algorithm that can tackle both these problems. Our deep model has two parts. The first part is a deep convolutional encoder-decoder network that takes an image and the corresponding trimap as inputs and predict the alpha matte of the image. The second part is a small convolutional network that refines the alpha matte predictions of the first network to have more accurate alpha values and sharper edges. In addition, we also create a large-scale image matting dataset including 49300 training images and 1000 testing images. We evaluate our algorithm on the image matting benchmark, our testing set, and a wide variety of real images. Experimental results clearly demonstrate the superiority of our algorithm over previous methods.

<p align="center">
<img src="https://user-images.githubusercontent.com/12726765/144175771-05b4d8f5-1abc-48ee-a5f1-8cc89a156e27.png" />
</p>

## Results


## Results and models

| Method | SAD | MSE | GRAD | CONN | Download |
| :------------------------------------------------------------------------: | :------: | :-------: | :------: | :------: | :-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
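The DIM abstract above describes a two-part model: an encoder-decoder that predicts a coarse alpha matte from the image and trimap, followed by a small refinement network that sharpens it. A toy PyTorch sketch of that structure (layer sizes are invented for illustration; this is not the repository's model):

```python
import torch
import torch.nn as nn


class TinyDIM(nn.Module):
    """Stage 1: encoder-decoder maps (RGB image, trimap) -> coarse alpha matte.
    Stage 2: a small CNN refines the coarse alpha given the image (residual)."""

    def __init__(self):
        super().__init__()
        self.encoder_decoder = nn.Sequential(   # stage 1, drastically shrunk
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.refiner = nn.Sequential(            # stage 2, sharpens alpha edges
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, image, trimap):
        coarse = self.encoder_decoder(torch.cat([image, trimap], dim=1))
        refined = coarse + self.refiner(torch.cat([image, coarse], dim=1))
        return refined.clamp(0, 1)


alpha = TinyDIM()(torch.randn(1, 3, 320, 320), torch.rand(1, 1, 320, 320))
print(alpha.shape)  # torch.Size([1, 1, 320, 320])
```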
