
Point Mamba

A Novel Point Cloud Backbone Based on State Space Model with Octree-Based Ordering Strategy

Citation

If you find this project useful for your work, please consider citing:

@misc{liu2024point,
     title={Point Mamba: A Novel Point Cloud Backbone Based on State Space Model with Octree-Based Ordering Strategy}, 
     author={Jiuming Liu and Ruiji Yu and Yian Wang and Yu Zheng and Tianchen Deng and Weicai Ye and Hesheng Wang},
     year={2024},
     eprint={2403.06467},
     archivePrefix={arXiv},
     primaryClass={cs.CV}
}

News

  • 2024/3/11 Released our code and checkpoint for semantic segmentation on ScanNet!

1. Environment

The code has been tested on Ubuntu 20.04 with three NVIDIA RTX 4090 GPUs (24 GB memory each).

  1. Python 3.10.13

    conda create -n your_env_name python=3.10.13
  2. Install PyTorch 2.1.1 with CUDA 11.8.

    pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu118
  3. Clone this repository and install the requirements.

    pip install -r requirements.txt
  4. Install the library for octree-based depthwise convolution.

    git clone https://github.com/octree-nn/dwconv.git
    pip install ./dwconv
  5. Install causal-conv1d and mamba, which you can download from this link. A quick sanity check for the full environment is sketched after this list.

    pip install -e causal-conv1d
    pip install -e mamba
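
As a quick sanity check for the environment, the sketch below imports each compiled dependency and pushes a toy sequence through a single Mamba block. It is a minimal sketch, assuming the module names dwconv, causal_conv1d, and mamba_ssm and the upstream Mamba(d_model, d_state, d_conv, expand) constructor; the mamba bundle linked above may differ.

    import torch
    import causal_conv1d                 # from step 5
    import dwconv                        # from step 4: octree-based depthwise conv
    from mamba_ssm import Mamba          # from step 5; assumes the upstream API

    print(torch.__version__)             # expect 2.1.1
    assert torch.cuda.is_available()     # expect True for the cu118 build

    # Push a toy (batch, length, channels) sequence through one Mamba block.
    block = Mamba(d_model=64, d_state=16, d_conv=4, expand=2).cuda()
    x = torch.randn(2, 128, 64, device="cuda")
    print(block(x).shape)                # torch.Size([2, 128, 64])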

2. ScanNet Segmentation

  1. Data: Download the data from the ScanNet benchmark. Unzip it and place it in the folder <scannet_folder>, then run the following command to prepare the dataset.

    python tools/seg_scannet.py --run process_scannet --path_in <scannet_folder>

    The file structure should look like this (a small script to verify it is sketched after this list):

    ├── scannet
    │   ├── scans
    │   │   ├── [scene_id]
    │   │   │   ├── [scene_id].aggregation.json
    │   │   │   ├── [scene_id].txt
    │   │   │   ├── [scene_id]_vh_clean.aggregation.json
    │   │   │   ├── [scene_id]_vh_clean.segs.json
    │   │   │   ├── [scene_id]_vh_clean_2.0.010000.segs.json
    │   │   │   ├── [scene_id]_vh_clean_2.labels.ply
    │   │   │   └── [scene_id]_vh_clean_2.ply
    │   ├── scans_test
    │   │   ├── [scene_id]
    │   │   │   ├── [scene_id].aggregation.json
    │   │   │   ├── [scene_id].txt
    │   │   │   ├── [scene_id]_vh_clean.aggregation.json
    │   │   │   ├── [scene_id]_vh_clean.segs.json
    │   │   │   ├── [scene_id]_vh_clean_2.0.010000.segs.json
    │   │   │   └── [scene_id]_vh_clean_2.ply
    │   └── scannetv2-labels.combined.tsv
  2. Train: Run the following command to train the network with 3 GPUs on port 10001. The mIoU on the validation set without voting is 75.0, and the training log and weights can be downloaded from this link.

    python scripts/run_seg_scannet.py --gpu 0,1,2 --alias scannet --port 10001
  3. Evaluate: Run the following command to get per-point predictions for the validation set with a voting strategy. After voting, the mIoU on the validation set is 75.7.

    python scripts/run_seg_scannet.py --gpu 0 --alias scannet --run validate
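
Before training, it can help to verify the raw layout shown above. The sketch below is not part of the codebase; the root path and the choice of [scene_id]_vh_clean_2.ply as the file to probe are assumptions, so adapt both to your setup.

    from pathlib import Path

    root = Path("scannet")  # your <scannet_folder>
    for split in ("scans", "scans_test"):
        scenes = sorted(p for p in (root / split).iterdir() if p.is_dir())
        missing = [s.name for s in scenes
                   if not (s / f"{s.name}_vh_clean_2.ply").exists()]
        print(f"{split}: {len(scenes)} scenes, {len(missing)} missing meshes")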

3. ModelNet40 Classification (Point Mamba(O))

  1. Data: Run the following command to prepare the dataset.

    python tools/cls_modelnet.py
  2. Train: Run the following command to train the network with 1 GPU. The classification accuracy on the testing set without voting is 92.7%. The code for Point Mamba(C) will be released later in a separate branch, and checkpoints will be released later as well. Note the trailing comma in the SOLVER.gpu override; see the sketch below.

    python classification.py --config configs/cls_m40.yaml SOLVER.gpu 0,
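
On the trailing comma in SOLVER.gpu 0,: assuming the configs are yacs-style CfgNodes (common in OCNN-derived codebases, though not confirmed by this README), command-line KEY VALUE pairs are merged with merge_from_list, and string values are parsed with ast.literal_eval, so "0," becomes the one-element tuple (0,). A minimal sketch with a hypothetical default:

    from yacs.config import CfgNode as CN

    cfg = CN()
    cfg.SOLVER = CN()
    cfg.SOLVER.gpu = (0, 1, 2)               # hypothetical default: three GPUs

    # `SOLVER.gpu 0,` arrives as this KEY/VALUE pair; literal_eval("0,")
    # yields the tuple (0,), which type-checks against the tuple default.
    cfg.merge_from_list(["SOLVER.gpu", "0,"])
    print(cfg.SOLVER.gpu)                     # (0,) -> train on GPU 0 only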

4. Acknowledgement

Our project is based on prior open-source works; thanks for their wonderful work!
