Merge pull request #2 from TuSimple/code_release
initial release
edwardzhou130 authored Sep 30, 2022
2 parents c9b6087 + cdc7521 commit 937c628
Showing 229 changed files with 32,763 additions and 12 deletions.
61 changes: 49 additions & 12 deletions README.md
# CenterFormer
Official implementation for [**CenterFormer: Center-based Transformer for 3D Object Detection**](https://arxiv.org/abs/2209.05588) (ECCV 2022 Oral)
```
@InProceedings{Zhou_centerformer,
title = {CenterFormer: Center-based Transformer for 3D Object Detection},
author = {Zhou, Zixiang and Zhao, Xiangchen and Wang, Yu and Wang, Panqu and Foroosh, Hassan},
booktitle = {ECCV},
year = {2022}
}
```

## Highlights
- **Center Transformer**: We introduce a center-based transformer network for 3D object detection.

- **Fast and Easy to Train**: We use the center feature as the initial query embedding to facilitate learning of the transformer, and we propose a multi-scale cross-attention layer that efficiently aggregates neighboring features without significantly increasing the computational complexity (see the illustrative sketch after this list).

- **Temporal information**: We use a cross-attention transformer to fuse object features from past frames.
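
To make the multi-scale cross-attention idea concrete, here is a minimal, self-contained sketch of center queries attending to a small window of BEV features at several scales. All names, shapes, and the window size are illustrative assumptions, not the repository's actual modules (which also include a deformable variant):

```
import torch
import torch.nn.functional as F

def multiscale_window_attention(queries, feature_maps, centers, win=3):
    """queries: [B, K, C] center features; feature_maps: list of [B, C, H, W]
    BEV maps ordered from finest to coarsest; centers: [B, K, 2] integer
    (x, y) positions at the finest scale. Hypothetical sketch only."""
    B, K, C = queries.shape
    pad = win // 2
    gathered = []
    for s, fmap in enumerate(feature_maps):
        fmap = F.pad(fmap, (pad, pad, pad, pad))   # keep every window in-bounds
        pos = centers // (2 ** s)                  # center location at this scale
        flat = fmap.flatten(2).transpose(1, 2)     # [B, Hp*Wp, C]
        Wp = fmap.shape[-1]
        for dy in range(win):
            for dx in range(win):
                idx = (pos[..., 1] + dy) * Wp + (pos[..., 0] + dx)  # [B, K]
                gathered.append(flat.gather(1, idx.unsqueeze(-1).expand(-1, -1, C)))
    kv = torch.stack(gathered, dim=2)              # [B, K, S*win*win, C]
    attn = (queries.unsqueeze(2) * kv).sum(-1) / C ** 0.5
    attn = attn.softmax(dim=-1)                    # each query attends only to its windows
    return (attn.unsqueeze(-1) * kv).sum(dim=2)    # [B, K, C] aggregated features
```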

<p align="center"> <img src='docs/mtf_architecture_eccv.png' align="center" height="500px"> </p>

## NEWS
[2022-09-30] CenterFormer source code is released.

## Abstract
Query-based transformers have shown great potential for constructing long-range attention in many image-domain tasks, but they have rarely been considered in LiDAR-based 3D object detection due to the overwhelming size of point cloud data. In this paper, we propose **CenterFormer**, a center-based transformer network for 3D object detection. CenterFormer first uses a center heatmap to select center candidates on top of a standard voxel-based point cloud encoder. It then uses the feature of each center candidate as a query embedding in the transformer. To further aggregate features from multiple frames, we design an approach to fuse features through cross-attention. Lastly, regression heads are added to predict the bounding box from the output center feature representation. Our design reduces the convergence difficulty and computational complexity of the transformer structure. The results show significant improvements over the strong baseline of anchor-free object detection networks. CenterFormer achieves state-of-the-art performance for a single model on the Waymo Open Dataset, with 73.7% mAPH on the validation set and 75.6% mAPH on the test set, significantly outperforming all previously published CNN- and transformer-based methods.
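
As a reading aid, the pipeline described above can be summarized in a few lines of pseudo-PyTorch. The encoder, heads, and shapes below are placeholders, not the actual modules in this repository:

```
import torch

def centerformer_forward(points, encoder, heatmap_head, transformer, reg_heads, k=500):
    # Hypothetical sketch of the pipeline from the abstract.
    bev = encoder(points)                        # voxel-based encoder -> BEV map [B, C, H, W]
    heatmap = heatmap_head(bev).sigmoid()        # per-class center heatmap [B, num_classes, H, W]
    scores, idx = heatmap.flatten(1).topk(k)     # select top-k center candidates
    B, C, H, W = bev.shape
    spatial = (idx % (H * W)).unsqueeze(-1).expand(-1, -1, C)    # [B, k, C] spatial indices
    queries = bev.flatten(2).transpose(1, 2).gather(1, spatial)  # center features as queries
    feats = transformer(queries, bev)            # cross-attention to (multi-frame) BEV features
    return {name: head(feats) for name, head in reg_heads.items()}  # box regression heads
```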

## Results

#### 3D detection on Waymo test set (L2 mAPH)

| Model | #Frames | Veh_L2 | Ped_L2 | Cyc_L2 | Mean |
|---------|---------|--------|--------|---------|---------|
| CenterFormer| 8 | 77.7 | 76.6 | 72.4 | 75.6 |
| CenterFormer| 16 | 78.3 | 77.4 | 73.2 | 76.3 |

#### 3D detection on Waymo val set (L2 mAPH)

| Model | #Frames | Veh_L2 | Ped_L2 | Cyc_L2 | Mean |
|---------|---------|--------|--------|---------|---------|
| [CenterFormer](configs/waymo/voxelnet/waymo_centerformer.py)| 1 | 69.4 | 67.7 | 70.2 | 69.1 |
| [CenterFormer deformable](configs/waymo/voxelnet/waymo_centerformer_deformable.py)| 1 | 69.7 | 68.3 | 68.8 | 69.0 |
| [CenterFormer](configs/waymo/voxelnet/waymo_centerformer_multiframe_2frames.py)| 2 | 71.7 | 73.0 | 72.7 | 72.5 |
| [CenterFormer deformable](configs/waymo/voxelnet/waymo_centerformer_multiframe_deformable_2frames.py)| 2 | 71.6 | 73.4 | 73.3 | 72.8 |
| [CenterFormer deformable](configs/waymo/voxelnet/waymo_centerformer_multiframe_deformable_4frames.py)| 4 | 72.9 | 74.2 | 72.6 | 73.2 |
| [CenterFormer deformable](configs/waymo/voxelnet/waymo_centerformer_multiframe_deformable_8frames.py)| 8 | 73.8 | 75.0 | 72.3 | 73.7 |
| [CenterFormer deformable](configs/waymo/voxelnet/waymo_centerformer_multiframe_deformable_16frames.py)| 16 | 74.6 | 75.6 | 72.7 | 74.3 |

The training and evaluation configs of the above models are provided in [Configs](configs/waymo/README.md).

## Installation
Please refer to [INSTALL](docs/INSTALL.md) to set up libraries needed for distributed training and sparse convolution.

## Training and Evaluation
Please refer to [WAYMO](docs/WAYMO.md) for data preparation, training, and evaluation.
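
For orientation, CenterPoint-style codebases usually launch distributed training along the following lines; the entry point and flags below are assumptions, so follow [WAYMO](docs/WAYMO.md) for the authoritative commands:

```
# hypothetical invocation following the CenterPoint convention
python -m torch.distributed.launch --nproc_per_node=8 ./tools/train.py \
    configs/waymo/voxelnet/waymo_centerformer.py
```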


## Acknowledgement
This project is built on the [CenterPoint](https://github.com/tianweiy/CenterPoint) codebase. We use the deformable cross-attention implementation from [Deformable-DETR](https://github.com/fundamentalvision/Deformable-DETR).
22 changes: 22 additions & 0 deletions configs/waymo/README.md
# Configs

### Common settings and notes

- The experiments are run with PyTorch 1.9 and CUDA 11.1.
- The training is conducted on 8 A100 GPUs.
- Training on GPUs with less memory will likely cause out-of-memory errors. In that case, try a config with a smaller batch size or fewer frames; a minimal example of the batch-size change is sketched below.
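
For example, memory use can be reduced by lowering `samples_per_gpu` in a config's `data` section (the value below is illustrative):

```
# e.g. in voxelnet/waymo_centerformer.py: halve the per-GPU batch size
# (the provided configs use 4) if you hit out-of-memory errors.
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=6,
    # train/val/test dataset entries unchanged
)
```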


### Waymo Validation Results

We provide the training and validation configs for the models in our paper. All numbers are level 2 (L2) mAPH on the Waymo validation set. Let us know if you have trouble reproducing the results.

| Model | #Frames | Veh_L2 | Ped_L2 | Cyc_L2 | Mean |
|---------|---------|--------|--------|---------|---------|
| [CenterFormer](voxelnet/waymo_centerformer.py)| 1 | 69.4 | 67.7 | 70.2 | 69.1 |
| [CenterFormer deformable](voxelnet/waymo_centerformer_deformable.py)| 1 | 69.7 | 68.3 | 68.8 | 69.0 |
| [CenterFormer](voxelnet/waymo_centerformer_multiframe_2frames.py)| 2 | 71.7 | 73.0 | 72.7 | 72.5 |
| [CenterFormer deformable](voxelnet/waymo_centerformer_multiframe_deformable_2frames.py)| 2 | 71.6 | 73.4 | 73.3 | 72.8 |
| [CenterFormer deformable](voxelnet/waymo_centerformer_multiframe_deformable_4frames.py)| 4 | 72.9 | 74.2 | 72.6 | 73.2 |
| [CenterFormer deformable](voxelnet/waymo_centerformer_multiframe_deformable_8frames.py)| 8 | 73.8 | 75.0 | 72.3 | 73.7 |
| [CenterFormer deformable](voxelnet/waymo_centerformer_multiframe_deformable_16frames.py)| 16 | 74.6 | 75.6 | 72.7 | 74.3 |
232 changes: 232 additions & 0 deletions configs/waymo/voxelnet/waymo_centerformer.py
import itertools
import logging

from det3d.utils.config_tool import get_downsample_factor

tasks = [
    dict(num_class=3, class_names=['VEHICLE', 'PEDESTRIAN', 'CYCLIST']),
]

class_names = list(itertools.chain(*[t["class_names"] for t in tasks]))

# training and testing settings
target_assigner = dict(
    tasks=tasks,
)

# use expanded gt label assigner
window_size = 1

# model settings
model = dict(
    type="VoxelNet_dynamic",
    pretrained=None,
    reader=dict(
        type="DynamicVoxelEncoder",
        pc_range=[-75.2, -75.2, -2, 75.2, 75.2, 4],
        voxel_size=[0.1, 0.1, 0.15],
    ),
    backbone=dict(
        type="SpMiddleResNetFHD", num_input_features=5, ds_factor=8),
    neck=dict(
        type="RPN_transformer",
        layer_nums=[5, 5, 1],
        ds_num_filters=[256, 256, 128],
        num_input_features=256,
        use_gt_training=True,
        corner=True,
        obj_num=500,
        assign_label_window_size=window_size,
        transformer_config=dict(
            depth=3,
            heads=4,
            dim_head=64,
            MLP_dim=256,
            DP_rate=0.3,
            out_att=False,
            cross_attention_kernel_size=[3, 3, 3],
        ),
        logger=logging.getLogger("RPN"),
    ),
    bbox_head=dict(
        type="CenterHeadIoU_1d",
        in_channels=256,
        tasks=tasks,
        dataset='waymo',
        weight=2,
        assign_label_window_size=window_size,
        corner_loss=True,
        iou_loss=True,
        code_weights=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
        common_heads={'reg': (2, 2), 'height': (1, 2), 'dim': (3, 2), 'rot': (2, 2), 'iou': (1, 2)},  # (output_channel, num_conv)
    ),
)

assigner = dict(
    target_assigner=target_assigner,
    out_size_factor=4,
    dense_reg=1,
    gaussian_overlap=0.1,
    max_objs=500,
    min_radius=2,
    gt_kernel_size=window_size,
    corner_prediction=True,
    pc_range=[-75.2, -75.2, -2, 75.2, 75.2, 4],
    voxel_size=[0.1, 0.1, 0.15],
)


train_cfg = dict(assigner=assigner)


test_cfg = dict(
    post_center_limit_range=[-80, -80, -10.0, 80, 80, 10.0],
    nms=dict(
        use_rotate_nms=False,
        use_multi_class_nms=True,
        nms_pre_max_size=[1600, 1600, 800],
        nms_post_max_size=[200, 200, 100],
        nms_iou_threshold=[0.8, 0.55, 0.55],
    ),
    score_threshold=0.1,
    pc_range=[-75.2, -75.2],
    out_size_factor=4,
    voxel_size=[0.1, 0.1],
    obj_num=1000,
)


# dataset settings
dataset_type = "WaymoDataset"
nsweeps = 1
data_root = "data/Waymo"

db_sampler = dict(
    type="GT-AUG",
    enable=False,
    db_info_path="data/Waymo/dbinfos_train_1sweeps_withvelo.pkl",
    sample_groups=[
        dict(VEHICLE=15),
        dict(PEDESTRIAN=10),
        dict(CYCLIST=10),
    ],
    db_prep_steps=[
        dict(
            filter_by_min_num_points=dict(
                VEHICLE=5,
                PEDESTRIAN=5,
                CYCLIST=5,
            )
        ),
        dict(filter_by_difficulty=[-1],),
    ],
    global_random_rotation_range_per_object=[0, 0],
    rate=1.0,
)

train_preprocessor = dict(
    mode="train",
    shuffle_points=True,
    global_rot_noise=[-0.78539816, 0.78539816],
    global_scale_noise=[0.95, 1.05],
    global_translate_noise=0.5,
    db_sampler=db_sampler,
    class_names=class_names,
)
val_preprocessor = dict(
    mode="val",
    shuffle_points=False,
)

voxel_generator = dict(
    range=[-75.2, -75.2, -2, 75.2, 75.2, 4],
    voxel_size=[0.1, 0.1, 0.15],
    max_points_in_voxel=5,
    max_voxel_num=[150000, 200000],
)

train_pipeline = [
    dict(type="LoadPointCloudFromFile", dataset=dataset_type),
    dict(type="LoadPointCloudAnnotations", with_bbox=True),
    dict(type="Preprocess", cfg=train_preprocessor),
    dict(type="AssignLabel", cfg=train_cfg["assigner"]),
    dict(type="Reformat"),
]
test_pipeline = [
    dict(type="LoadPointCloudFromFile", dataset=dataset_type),
    dict(type="LoadPointCloudAnnotations", with_bbox=True),
    dict(type="Preprocess", cfg=val_preprocessor),
    dict(type="AssignLabel", cfg=train_cfg["assigner"]),
    dict(type="Reformat"),
]

train_anno = "data/Waymo/infos_train_01sweeps_filter_zero_gt.pkl"
val_anno = "data/Waymo/infos_val_01sweeps_filter_zero_gt.pkl"
test_anno = 'data/Waymo/infos_test_01sweeps_filter_zero_gt.pkl'

data = dict(
    samples_per_gpu=4,
    workers_per_gpu=6,
    train=dict(
        type=dataset_type,
        root_path=data_root,
        info_path=train_anno,
        ann_file=train_anno,
        nsweeps=nsweeps,
        # load_interval=5,
        class_names=class_names,
        pipeline=train_pipeline,
    ),
    val=dict(
        type=dataset_type,
        root_path=data_root,
        info_path=val_anno,
        test_mode=True,
        ann_file=val_anno,
        nsweeps=nsweeps,
        class_names=class_names,
        pipeline=test_pipeline,
    ),
    test=dict(
        type=dataset_type,
        root_path=data_root,
        info_path=test_anno,
        ann_file=test_anno,
        nsweeps=nsweeps,
        class_names=class_names,
        pipeline=test_pipeline,
    ),
)



optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))

# optimizer
optimizer = dict(
    type="adam", amsgrad=0.0, wd=0.01, fixed_wd=True, moving_average=False,
)
lr_config = dict(
    type="one_cycle", lr_max=0.003, moms=[0.95, 0.85], div_factor=10.0, pct_start=0.4,
)

checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
    interval=5,
    hooks=[
        dict(type="TextLoggerHook"),
        # dict(type='TensorboardLoggerHook')
    ],
)
# yapf:enable
# runtime settings
total_epochs = 20
disable_dbsampler_after_epoch = 15
device_ids = range(8)
dist_params = dict(backend="nccl", init_method="env://")
log_level = "INFO"
work_dir = './work_dirs/{}/'.format(__file__[__file__.rfind('/') + 1:-3])
load_from = None
resume_from = None
workflow = [('train', 1)]
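
# For reference, a config like the one above is typically consumed by the
# training tools via a Config helper; a hypothetical usage sketch (assuming
# the CenterPoint-style det3d.torchie.Config API) is:
#
#   from det3d.torchie import Config
#   cfg = Config.fromfile("configs/waymo/voxelnet/waymo_centerformer.py")
#   cfg.model["type"]            # "VoxelNet_dynamic"
#   cfg.data["samples_per_gpu"]  # 4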