Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding

Yunze Man · Shuhong Zheng · Zhipeng Bao · Martial Hebert · Liang-Yan Gui · Yu-Xiong Wang

[NeurIPS 2024] [Project Page] [arXiv] [pdf] [BibTeX]


This repository contains the official PyTorch implementation of the paper "Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding". The paper is available on arXiv, and the project page is online here. This work is accepted by NeurIPS 2024.

About

We design a unified framework, shown in the figure above, to extract features from different foundation models, construct 3D feature embeddings as scene embeddings, and evaluate them on multiple downstream tasks. Existing work usually represents a complex indoor scene with a combination of 2D and 3D modalities. Given a scene represented as posed images, videos, and 3D point clouds, we extract feature embeddings with a collection of visual foundation models. For image- and video-based models, a multi-view 3D projection module lifts their features into 3D space for the subsequent 3D scene evaluation tasks.
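The multi-view projection step can be sketched as follows. This is a minimal single-view version, assuming a pinhole camera model, a known depth map, and a camera-to-world pose; function and parameter names are hypothetical, not the repo's actual API:

```python
import numpy as np

def project_features_to_points(points, feat_map, depth, intrinsics, cam2world,
                               depth_thresh=0.05):
    """Back-project 2D features onto 3D points for one view.

    points:      (N, 3) world-space point cloud
    feat_map:    (H, W, C) 2D feature map from a foundation model
    depth:       (H, W) depth map rendered/captured for the same view
    intrinsics:  (3, 3) pinhole camera intrinsic matrix
    cam2world:   (4, 4) camera-to-world pose
    Returns (N, C) per-point features and an (N,) visibility mask.
    """
    H, W, C = feat_map.shape
    # World -> camera coordinates
    world2cam = np.linalg.inv(cam2world)
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    pts_cam = (world2cam @ pts_h.T).T[:, :3]
    z = pts_cam[:, 2]
    # Camera -> pixel coordinates
    uv = (intrinsics @ pts_cam.T).T
    u = np.round(uv[:, 0] / np.clip(z, 1e-6, None)).astype(int)
    v = np.round(uv[:, 1] / np.clip(z, 1e-6, None)).astype(int)
    in_frame = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    # Occlusion check: point depth must agree with the view's depth map
    visible = np.zeros(len(points), dtype=bool)
    idx = np.where(in_frame)[0]
    visible[idx] = np.abs(depth[v[idx], u[idx]] - z[idx]) < depth_thresh
    feats = np.zeros((len(points), C), dtype=feat_map.dtype)
    feats[visible] = feat_map[v[visible], u[visible]]
    return feats, visible
```

In a multi-view setting, one would run this per frame and average the features of each point over the views in which it is visible.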

We also visualize the scene features extracted by the vision foundation models.

BibTeX

If you use our work in your research, please cite our publication:

@inproceedings{man2024lexicon3d,
  title={Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding},
  author={Man, Yunze and Zheng, Shuhong and Bao, Zhipeng and Hebert, Martial and Gui, Liang-Yan and Wang, Yu-Xiong},
  booktitle={Advances in Neural Information Processing Systems},
  year={2024}
}

News

  • 12/03/2024: Updated the evaluation protocol and the model documentation.
  • 09/25/2024: Our paper was accepted by NeurIPS 2024.
  • 09/05/2024: Released the foundation model feature extraction and fusion scripts.
  • 06/24/2024: Initialized the GitHub page.

Environment Setup

Please install the required packages and dependencies according to the requirements.txt file.

In addition,

  • to use the LSeg model, please follow this repo to install the necessary dependencies.
  • to use the Swin3D model, please follow this repo and this repo to install the necessary dependencies.

Dataset Preparation. Download the ScanNet dataset from the official repository and follow the instructions here to preprocess it, obtaining RGB video frames and point clouds for each ScanNet scene.

Feature Extraction

To extract features from the foundation models, run the corresponding scripts in the lexicon3d folder. For example, to extract features with the CLIP model, run the following command:

python fusion_scannet_clip.py  --data_dir dataset/ScanNet/openscene/  --output_dir  dataset/lexicon3d/clip/ --split train --prefix clip

This script extracts CLIP features for the ScanNet dataset. The extracted features are saved in the output_dir folder and contain the feature embeddings, points, and voxel grids.
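To make the point-to-voxel fusion concrete, here is a minimal sketch of averaging per-point features into a sparse voxel grid. The function name and the exact pooling scheme are illustrative assumptions; the repo's fusion scripts may differ in detail:

```python
import numpy as np

def voxelize_features(points, feats, voxel_size=0.05):
    """Average per-point features into a sparse voxel grid.

    points: (N, 3) point coordinates in meters
    feats:  (N, C) per-point feature embeddings
    Returns (M, 3) integer voxel indices and (M, C) averaged features.
    """
    # Quantize each point to its voxel index
    vox = np.floor(points / voxel_size).astype(np.int64)
    uniq, inv = np.unique(vox, axis=0, return_inverse=True)
    # Sum features per voxel, then divide by the point count
    pooled = np.zeros((len(uniq), feats.shape[1]), dtype=np.float64)
    counts = np.zeros(len(uniq), dtype=np.int64)
    np.add.at(pooled, inv, feats)
    np.add.at(counts, inv, 1)
    return uniq, pooled / counts[:, None]
```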

Evaluation on Downstream Tasks

For evaluation, we provide scripts to evaluate the extracted features on the downstream tasks; detailed instructions can be found in the evals folder. For example, to evaluate the extracted features on the 3D Question Answering task, cd to the evals/3D-LLM/3DLLM_BLIP2-base folder and run the following command:

python -m torch.distributed.run --nproc_per_node=4 train.py --cfg-path lavis/projects/blip2/train/finetune_sqa.yaml

Refer to the evals folder for more details on the evaluation scripts.
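As a lightweight illustration of probing frozen scene embeddings, here is a minimal linear probe (ridge regression to one-hot labels) on pre-extracted features. This is a hypothetical stand-in for intuition only, not the repo's actual task heads or evaluation protocol:

```python
import numpy as np

def linear_probe_accuracy(train_x, train_y, test_x, test_y, num_classes):
    """Fit a linear probe on frozen features and report test accuracy.

    train_x/test_x: (N, D) frozen feature embeddings
    train_y/test_y: (N,) integer class labels
    """
    onehot = np.eye(num_classes)[train_y]
    # Ridge-regularized least squares: W = (X^T X + lam*I)^-1 X^T Y
    lam = 1e-3
    XtX = train_x.T @ train_x + lam * np.eye(train_x.shape[1])
    W = np.linalg.solve(XtX, train_x.T @ onehot)
    preds = (test_x @ W).argmax(axis=1)
    return (preds == test_y).mean()
```

A probe like this keeps the foundation-model features fixed, so the score reflects the quality of the embedding rather than the capacity of the downstream head.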

Acknowledgements

This repo builds on the fantastic work OpenScene. We also thank the authors of P3DA and the authors of all relevant visual foundation models for their great work and for open-sourcing their codebases.
