[Docs] Add docs and README for MinkUnet (#2358)
* add readme

* rename

* fix miou typo

* add link

* fix backbone name

* add torchsparse link

* revise link
sunjiahao1999 authored Mar 29, 2023
1 parent 20987e5 commit b481efc
Showing 5 changed files with 159 additions and 52 deletions.
55 changes: 29 additions & 26 deletions README.md
@@ -134,6 +134,7 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
<li><a href="configs/dgcnn">DGCNN (TOG'2019)</a></li>
<li>DLA (CVPR'2018)</li>
<li>MinkResNet (CVPR'2019)</li>
<li><a href="configs/minkunet">MinkUNet (CVPR'2019)</a></li>
<li><a href="configs/cylinder3d">Cylinder3D (CVPR'2021)</a></li>
</ul>
</td>
@@ -221,6 +222,7 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
<td>
<li><b>Outdoor</b></li>
<ul>
<li><a href="configs/minkunet">MinkUNet (CVPR'2019)</a></li>
<li><a href="configs/cylinder3d">Cylinder3D (CVPR'2021)</a></li>
</ul>
<li><b>Indoor</b></li>
@@ -237,32 +239,33 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
</tbody>
</table>

| | ResNet | PointNet++ | SECOND | DGCNN | RegNetX | DLA | MinkResNet | Cylinder3D |
| :-----------: | :----: | :--------: | :----: | :---: | :-----: | :-: | :--------: | :--------: |
| SECOND |||||||||
| PointPillars |||||||||
| FreeAnchor |||||||||
| VoteNet |||||||||
| H3DNet |||||||||
| 3DSSD |||||||||
| Part-A2 |||||||||
| MVXNet |||||||||
| CenterPoint |||||||||
| SSN |||||||||
| ImVoteNet |||||||||
| FCOS3D |||||||||
| PointNet++ |||||||||
| Group-Free-3D |||||||||
| ImVoxelNet |||||||||
| PAConv |||||||||
| DGCNN |||||||||
| SMOKE |||||||||
| PGD |||||||||
| MonoFlex |||||||||
| SA-SSD |||||||||
| FCAF3D |||||||||
| PV-RCNN |||||||||
| Cylinder3D |||||||||
| | ResNet | PointNet++ | SECOND | DGCNN | RegNetX | DLA | MinkResNet | Cylinder3D | MinkUNet |
| :-----------: | :----: | :--------: | :----: | :---: | :-----: | :-: | :--------: | :--------: | :------: |
| SECOND ||||||||||
| PointPillars ||||||||||
| FreeAnchor ||||||||||
| VoteNet ||||||||||
| H3DNet ||||||||||
| 3DSSD ||||||||||
| Part-A2 ||||||||||
| MVXNet ||||||||||
| CenterPoint ||||||||||
| SSN ||||||||||
| ImVoteNet ||||||||||
| FCOS3D ||||||||||
| PointNet++ ||||||||||
| Group-Free-3D ||||||||||
| ImVoxelNet ||||||||||
| PAConv ||||||||||
| DGCNN ||||||||||
| SMOKE ||||||||||
| PGD ||||||||||
| MonoFlex ||||||||||
| SA-SSD ||||||||||
| FCAF3D ||||||||||
| PV-RCNN ||||||||||
| Cylinder3D ||||||||||
| MinkUNet ||||||||||

**Note:** All **300+ models and methods from 40+ papers** for 2D detection supported by [MMDetection](https://github.com/open-mmlab/mmdetection/blob/3.x/docs/en/model_zoo.md) can be trained or used in this codebase.

55 changes: 29 additions & 26 deletions README_zh-CN.md
@@ -131,6 +131,7 @@ MMDetection3D is an open-source toolbox for object detection based on PyTorch, the next generation
<li><a href="configs/dgcnn">DGCNN (TOG'2019)</a></li>
<li>DLA (CVPR'2018)</li>
<li>MinkResNet (CVPR'2019)</li>
<li><a href="configs/minkunet">MinkUNet (CVPR'2019)</a></li>
<li><a href="configs/cylinder3d">Cylinder3D (CVPR'2021)</a></li>
</ul>
</td>
@@ -217,6 +218,7 @@ MMDetection3D is an open-source toolbox for object detection based on PyTorch, the next generation
<td>
<li><b>Outdoor</b></li>
<ul>
<li><a href="configs/minkunet">MinkUNet (CVPR'2019)</a></li>
<li><a href="configs/cylinder3d">Cylinder3D (CVPR'2021)</a></li>
</ul>
<li><b>Indoor</b></li>
@@ -233,32 +235,33 @@ MMDetection3D is an open-source toolbox for object detection based on PyTorch, the next generation
</tbody>
</table>

| | ResNet | PointNet++ | SECOND | DGCNN | RegNetX | DLA | MinkResNet | Cylinder3D |
| :-----------: | :----: | :--------: | :----: | :---: | :-----: | :-: | :--------: | :--------: |
| SECOND |||||||||
| PointPillars |||||||||
| FreeAnchor |||||||||
| VoteNet |||||||||
| H3DNet |||||||||
| 3DSSD |||||||||
| Part-A2 |||||||||
| MVXNet |||||||||
| CenterPoint |||||||||
| SSN |||||||||
| ImVoteNet |||||||||
| FCOS3D |||||||||
| PointNet++ |||||||||
| Group-Free-3D |||||||||
| ImVoxelNet |||||||||
| PAConv |||||||||
| DGCNN |||||||||
| SMOKE |||||||||
| PGD |||||||||
| MonoFlex |||||||||
| SA-SSD |||||||||
| FCAF3D |||||||||
| PV-RCNN |||||||||
| Cylinder3D |||||||||
| | ResNet | PointNet++ | SECOND | DGCNN | RegNetX | DLA | MinkResNet | Cylinder3D | MinkUNet |
| :-----------: | :----: | :--------: | :----: | :---: | :-----: | :-: | :--------: | :--------: | :------: |
| SECOND ||||||||||
| PointPillars ||||||||||
| FreeAnchor ||||||||||
| VoteNet ||||||||||
| H3DNet ||||||||||
| 3DSSD ||||||||||
| Part-A2 ||||||||||
| MVXNet ||||||||||
| CenterPoint ||||||||||
| SSN ||||||||||
| ImVoteNet ||||||||||
| FCOS3D ||||||||||
| PointNet++ ||||||||||
| Group-Free-3D ||||||||||
| ImVoxelNet ||||||||||
| PAConv ||||||||||
| DGCNN ||||||||||
| SMOKE ||||||||||
| PGD ||||||||||
| MonoFlex ||||||||||
| SA-SSD ||||||||||
| FCAF3D ||||||||||
| PV-RCNN ||||||||||
| Cylinder3D ||||||||||
| MinkUNet ||||||||||

**Note:** All **300+ models and methods from 40+ papers** for 2D detection supported by [MMDetection](https://github.com/open-mmlab/mmdetection/blob/3.x/docs/zh_cn/model_zoo.md) can be trained or used in MMDetection3D.

43 changes: 43 additions & 0 deletions configs/minkunet/README.md
@@ -0,0 +1,43 @@
# 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks

> [4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks](https://arxiv.org/abs/1904.08755)
<!-- [ALGORITHM] -->

## Abstract

In many robotics and VR/AR applications, 3D-videos are readily-available sources of input (a continuous sequence of depth images, or LIDAR scans). However, those 3D-videos are processed frame-by-frame either through 2D convnets or 3D perception algorithms. In this work, we propose 4-dimensional convolutional neural networks for spatio-temporal perception that can directly process such 3D-videos using high-dimensional convolutions. For this, we adopt sparse tensors and propose the generalized sparse convolution that encompasses all discrete convolutions. To implement the generalized sparse convolution, we create an open-source auto-differentiation library for sparse tensors that provides extensive functions for high-dimensional convolutional neural networks. We create 4D spatio-temporal convolutional neural networks using the library and validate them on various 3D semantic segmentation benchmarks and proposed 4D datasets for 3D-video perception. To overcome challenges in the 4D space, we propose the hybrid kernel, a special case of the generalized sparse convolution, and the trilateral-stationary conditional random field that enforces spatio-temporal consistency in the 7D space-time-chroma space. Experimentally, we show that convolutional neural networks with only generalized 3D sparse convolutions can outperform 2D or 2D-3D hybrid methods by a large margin. Also, we show that on 3D-videos, 4D spatio-temporal convolutional neural networks are robust to noise, outperform 3D convolutional neural networks and are faster than the 3D counterpart in some cases.

<div align=center>
<img src="https://user-images.githubusercontent.com/72679458/225243534-cd0ed738-4224-4e7c-bcac-4f4c8d89f3a9.png" width="800"/>
</div>
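As a rough illustration of the sparse-tensor idea in the abstract (a simplified sketch, not the paper's library or its API): a point cloud is stored as integer coordinates plus per-site features, and a generalized sparse convolution only gathers neighbors that actually exist, skipping empty space entirely. The offset set and hash-table lookup below are toy choices for illustration.

```python
import numpy as np

# A sparse tensor: integer voxel coordinates and one feature per occupied site.
coords = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]])
feats = np.array([1.0, 2.0, 3.0])

# Hash occupied coordinates for O(1) neighbor lookup.
table = {tuple(c): i for i, c in enumerate(coords)}

# Generalized sparse convolution over a chosen offset set: for each occupied
# site, accumulate weight * feature over neighbors that exist; empty sites
# contribute nothing and are simply skipped.
offsets = [(0, 0, 0), (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]
weights = {o: 1.0 for o in offsets}  # toy kernel: all-ones

out = np.zeros_like(feats)
for i, c in enumerate(coords):
    for o in offsets:
        j = table.get(tuple(np.add(c, o)))
        if j is not None:
            out[i] += weights[o] * feats[j]

# out → [6.0, 3.0, 4.0]
```

Only occupied coordinates are ever touched, which is what makes the operation efficient on sparse 3D/4D data.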

## Introduction

We implement MinkUNet with the [TorchSparse](https://github.com/mit-han-lab/torchsparse) backend and provide results and checkpoints on the SemanticKITTI dataset.
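Assuming the standard MMDetection3D entry points (`tools/dist_train.sh` / `tools/test.py`; verify the paths against your checkout), training and evaluation would look roughly like the following, with the config name taken from the checkpoint URLs below:

```shell
# Train MinkUNet-W32 on SemanticKITTI with 8 GPUs.
bash tools/dist_train.sh configs/minkunet/minkunet_w32_8xb2-15e_semantickitti.py 8

# Evaluate a downloaded checkpoint on a single GPU.
python tools/test.py configs/minkunet/minkunet_w32_8xb2-15e_semantickitti.py \
    minkunet_w32_8xb2-15e_semantickitti_20230309_160710-7fa0a6f1.pth
```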

## Results and models

### SemanticKITTI

| Method | Lr schd | Mem (GB) | mIoU | Download |
| :----------: | :-----: | :------: | :--: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| MinkUNet-W16 | 15e | 3.4 | 60.3 | [model](https://download.openmmlab.com/mmdetection3d/v1.1.0_models/minkunet/minkunet_w16_8xb2-15e_semantickitti/minkunet_w16_8xb2-15e_semantickitti_20230309_160737-0d8ec25b.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.1.0_models/minkunet/minkunet_w16_8xb2-15e_semantickitti/minkunet_w16_8xb2-15e_semantickitti_20230309_160737.log) |
| MinkUNet-W20 | 15e | 3.7 | 61.6 | [model](https://download.openmmlab.com/mmdetection3d/v1.1.0_models/minkunet/minkunet_w20_8xb2-15e_semantickitti/minkunet_w20_8xb2-15e_semantickitti_20230309_160718-c3b92e6e.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.1.0_models/minkunet/minkunet_w20_8xb2-15e_semantickitti/minkunet_w20_8xb2-15e_semantickitti_20230309_160718.log) |
| MinkUNet-W32 | 15e | 4.9 | 63.1 | [model](https://download.openmmlab.com/mmdetection3d/v1.1.0_models/minkunet/minkunet_w32_8xb2-15e_semantickitti/minkunet_w32_8xb2-15e_semantickitti_20230309_160710-7fa0a6f1.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.1.0_models/minkunet/minkunet_w32_8xb2-15e_semantickitti/minkunet_w32_8xb2-15e_semantickitti_20230309_160710.log) |

**Note:** We follow the implementation in the original SPVNAS [repo](https://github.com/mit-han-lab/spvnas); W16/W20/W32 indicate different channel widths.

**Note:** With the TorchSparse backend, model performance is unstable and may fluctuate by about 1.5 mIoU across different random seeds.
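For reference, the mIoU reported above is the mean over classes of intersection-over-union between predicted and ground-truth labels. A minimal sketch with toy labels (not the benchmark's actual evaluation code):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean IoU over classes that appear in the prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:  # skip classes absent from both arrays
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([0, 0, 1, 1, 2])
gt = np.array([0, 1, 1, 1, 2])
score = mean_iou(pred, gt, 3)  # (0.5 + 2/3 + 1.0) / 3 ≈ 0.722
```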

## Citation

```latex
@inproceedings{choy20194d,
title={4d spatio-temporal convnets: Minkowski convolutional neural networks},
author={Choy, Christopher and Gwak, JunYoung and Savarese, Silvio},
booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
pages={3075--3084},
year={2019}
}
```