
Gamba

This is the official implementation of Gamba: Marry Gaussian Splatting with Mamba for single view 3D reconstruction.

Why Gamba

🔥 Reconstructs a 3D object from a single input image within 50 milliseconds.

🔥 The first end-to-end trainable single-view reconstruction model with 3D Gaussian Splatting (3DGS).

(Teaser video: gamba-teaser.mp4)

Install

# xformers is required! please refer to https://github.com/facebookresearch/xformers for details.
# for example, we use torch 2.1.0 + cuda 11.8
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
pip install causal-conv1d==1.2.0 mamba-ssm
git clone --recursive git@github.com:SkyworkAI/Gamba.git
# a modified gaussian splatting (+ depth, alpha rendering)
pip install ./submodules/diff-gaussian-rasterization
# radial polygon mask, only used during training
pip install ./submodules/rad-polygon-mask

# for mesh extraction
pip install git+https://github.com/NVlabs/nvdiffrast

# other dependencies
pip install -r requirements.txt
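
As a quick sanity check after installation, a short Python snippet like the following can confirm that PyTorch sees the GPU and that the compiled extensions import correctly. This is a minimal sketch, not part of the official repo; the module names are the standard ones exposed by the pip packages above and may differ if the submodules are renamed.

# environment sanity check (a sketch, not part of the official repo)
import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

# these are compiled CUDA extensions; importing them verifies they were
# built against a compatible torch / CUDA version
import mamba_ssm                      # from `pip install mamba-ssm`
import diff_gaussian_rasterization    # from the modified gaussian splatting submodule
import nvdiffrast.torch as dr         # from `pip install git+.../nvdiffrast`

print("mamba-ssm, diff-gaussian-rasterization and nvdiffrast imported successfully.")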

Pretrained Weights

Our pretrained weights can be downloaded from Hugging Face. A larger model is on the way!

For example, to download the bf16 model for inference:

mkdir checkpoint && cd checkpoint
wget https://huggingface.co/florinshen/Gamba/resolve/main/gamba_ep399.pth
cd ..
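
The downloaded file is a standard PyTorch checkpoint, so it can be inspected with torch.load before running the full pipeline. The sketch below is only illustrative: the internal key layout of gamba_ep399.pth (e.g. whether the state dict is wrapped under a "model" key) is an assumption.

# inspect the downloaded checkpoint (a sketch; the key layout is an assumption)
import torch

ckpt = torch.load("checkpoint/gamba_ep399.pth", map_location="cpu")

# checkpoints are usually either a raw state_dict or a dict wrapping one
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt

# print the first few parameter names, shapes and dtypes
for i, (name, tensor) in enumerate(state_dict.items()):
    print(name, tuple(tensor.shape), tensor.dtype)
    if i >= 4:
        break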

Inference

Inference takes about 1.5 GB of GPU memory and completes within 50 milliseconds.

bash scripts/test.sh

For more configuration options, please check the options defined in the repository.
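
To reproduce the memory and latency numbers above on your own hardware, a generic timing helper built on torch.cuda events can wrap whatever entry point scripts/test.sh ultimately calls. This is a sketch under assumptions: run_inference below is a hypothetical placeholder for the model call, not an API exposed by this repo.

# generic GPU latency / memory measurement (a sketch; `run_inference` is a
# hypothetical placeholder for the model call made inside scripts/test.sh)
import torch

def benchmark(run_inference, warmup=3, iters=10):
    torch.cuda.reset_peak_memory_stats()
    for _ in range(warmup):          # warm-up excludes CUDA context / compilation cost
        run_inference()
    torch.cuda.synchronize()

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        run_inference()
    end.record()
    torch.cuda.synchronize()

    ms_per_call = start.elapsed_time(end) / iters
    peak_gb = torch.cuda.max_memory_allocated() / 1024**3
    print(f"{ms_per_call:.1f} ms per reconstruction, {peak_gb:.2f} GB peak GPU memory")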

Training

We will update training tutorials soon.

Acknowledgement

This work is built on many amazing research works and open-source projects, thanks a lot to all the authors for sharing!

Citation

@article{shen2024gamba,
  title={Gamba: Marry gaussian splatting with mamba for single view 3d reconstruction},
  author={Shen, Qiuhong and Wu, Zike and Yi, Xuanyu and Zhou, Pan and Zhang, Hanwang and Yan, Shuicheng and Wang, Xinchao},
  journal={arXiv preprint arXiv:2403.18795},
  year={2024}
}

Please also check out our other project on unified 3D generation, MVGamba. Its code and pretrained weights will also be released soon.

@article{yi2024mvgamba,
  title={MVGamba: Unify 3D Content Generation as State Space Sequence Modeling},
  author={Yi, Xuanyu and Wu, Zike and Shen, Qiuhong and Xu, Qingshan and Zhou, Pan and Lim, Joo-Hwee and Yan, Shuicheng and Wang, Xinchao and Zhang, Hanwang},
  journal={arXiv preprint arXiv:2406.06367},
  year={2024}
}