Official Implementation of RTGS: Enabling Real-Time Gaussian Splatting on Mobile Devices Using Efficiency-Guided Pruning and Foveated Rendering.

FoV-3DGS

Official Implementation of MetaSapiens: Real-Time Neural Rendering with Efficiency-Aware Pruning and Accelerated Foveated Rendering (ASPLOS 2025)

(1) Setup

  • Clone the repo
git clone https://github.com/horizon-research/FoV-3DGS.git
  • Prepare Dataset

  • Prepare Dense 3DGS for pruning

    • Original 3DGS: download from 3DGS
    • Mini-Splatting-D: We provide our reproduced model of the m360 bicycle scene here; for other scenes, you can reproduce the models using their code.
    • Move the dense model into the scene folder and name it "ms_d". The structure under that folder should look like:
    |-- cameras.json
    |-- cfg_args
    |-- chkpnt30000.pth
    |-- input.ply
    `-- point_cloud
        `-- iteration_30000
            `-- point_cloud.ply
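Before launching the pipeline, it can help to verify that the layout above is in place. A minimal sketch of such a check (the function and the throwaway demo directory are our own illustration, not part of the repo):

```python
# Sanity-check that a scene's dense "ms_d" folder matches the expected layout.
# The file list mirrors the tree shown above.
import tempfile
from pathlib import Path

EXPECTED = [
    "cameras.json",
    "cfg_args",
    "chkpnt30000.pth",
    "input.ply",
    "point_cloud/iteration_30000/point_cloud.ply",
]

def check_ms_d(ms_d_dir):
    """Return the list of expected files missing under ms_d_dir."""
    root = Path(ms_d_dir)
    return [rel for rel in EXPECTED if not (root / rel).is_file()]

# Demo on a throwaway directory that mimics the layout above.
with tempfile.TemporaryDirectory() as tmp:
    for rel in EXPECTED:
        p = Path(tmp) / rel
        p.parent.mkdir(parents=True, exist_ok=True)
        p.touch()
    print(check_ms_d(tmp))  # an empty list means the layout is complete
```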
  • Prepare environment:

    • We use docker:
    # pull the docker image (this is for x86 machines; for Jetson you will need a different prebuilt image, see https://github.com/dusty-nv/jetson-containers/tree/master and pick one suitable for your JetPack version)
    docker pull pytorch/pytorch:2.3.0-cuda11.8-cudnn8-devel
    # run docker
    bash ./run_docker.sh
    # enter the docker container and install all submodules
    bash update_submodules.sh
    • Install some additional packages:
        pip install plyfile opencv-python matplotlib icecream
        apt-get update
        apt-get install libgl1-mesa-glx libglib2.0-0 -y
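Once inside the container, a quick way to confirm the packages above are importable (this check is our own suggestion, not a repo script; note that opencv-python imports as cv2):

```python
# Check which of the required Python packages are importable in the
# current environment, without actually importing them.
from importlib.util import find_spec

PACKAGES = ["plyfile", "cv2", "matplotlib", "icecream"]  # cv2 == opencv-python

def missing_packages(names):
    """Return the subset of names that cannot be imported."""
    return [n for n in names if find_spec(n) is None]

# Prints the missing packages; an empty list means the environment is ready.
print(missing_packages(PACKAGES))
```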

(2) Run the pruning & FR (Foveated Rendering) Masking pipeline

# only the bicycle scene is enabled by default; uncomment the other scenes for a batch test
python3 combined_training_script.py

The result will be stored in the scene folder.
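Conceptually, efficiency-guided pruning ranks Gaussians by their quality contribution relative to their rendering cost and drops the least efficient ones. A toy sketch of that idea (the scoring and keep_ratio below are illustrative stand-ins, not the actual criterion used by combined_training_script.py):

```python
# Toy efficiency-guided pruning: rank primitives by an illustrative
# efficiency score (contribution / cost) and keep the top fraction.
def prune_by_efficiency(contribution, cost, keep_ratio=0.5):
    """Return the sorted indices of primitives kept after pruning."""
    eff = [c / max(k, 1e-8) for c, k in zip(contribution, cost)]
    order = sorted(range(len(eff)), key=lambda i: eff[i], reverse=True)
    n_keep = max(1, int(len(eff) * keep_ratio))
    return sorted(order[:n_keep])

# Example: 6 Gaussians with made-up scores; keep the most efficient half.
contribution = [0.9, 0.1, 0.5, 0.8, 0.2, 0.7]
cost         = [0.3, 0.4, 0.1, 0.8, 0.1, 0.7]
print(prune_by_efficiency(contribution, cost, keep_ratio=0.5))  # -> [0, 2, 4]
```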

(3) Measure the Objective Metrics for the PS=1 Model

python3 quality_eval.py 

The result will be in ./full_eval_results/ours-Q
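For reference, PSNR is one standard objective metric for this kind of evaluation; a minimal implementation of the formula is shown below as an illustration (we are not asserting this is exactly what quality_eval.py computes; it may also report metrics such as SSIM or LPIPS):

```python
# Standard PSNR (peak signal-to-noise ratio) between two 8-bit images,
# given here as flat pixel lists for simplicity.
import math

def psnr(img_a, img_b, max_val=255.0):
    """PSNR in dB between two equally sized images."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

print(round(psnr([0, 128, 255], [0, 128, 250]), 2))
```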

(4) Generate the FoV model & Measure its FPS

bash batch_ours_fps.sh 
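The FoV model renders progressively sparser levels as eccentricity from the gaze point grows. A toy sketch of level selection (the pixel thresholds and three-level split are our own illustrative assumptions, not the repo's actual FoV parameters):

```python
# Toy foveated level selection: map screen-space distance from the gaze
# point (eccentricity) to a pruning level. Level 0 is the densest
# (foveal) model; higher levels are sparser peripheral models.
import math

def fov_level(px, py, gaze, thresholds=(100.0, 300.0)):
    """Return 0 (foveal) .. len(thresholds) (far periphery) for a pixel."""
    ecc = math.hypot(px - gaze[0], py - gaze[1])
    for level, t in enumerate(thresholds):
        if ecc <= t:
            return level
    return len(thresholds)

gaze = (640, 360)  # assumed screen-space gaze point in pixels
print(fov_level(650, 365, gaze))   # near the gaze -> densest level
print(fov_level(1200, 700, gaze))  # far periphery -> sparsest level
```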

Baselines

  • SM (Shared Model) FR: Generate + Measure the Layer-Wise Quality & FPS
# only the bicycle scene is enabled by default; uncomment the other scenes for a batch test
bash batch_gen_naive_FR.sh # generate SMFR
python3 quality_eval_layers_naive.py # measure quality in each layer; results will be in ./layers_eval_results/naiveFR
bash batch_naive_fps.sh # measure FPS; results will be in ./fps
  • MM (Multi-Model) FR: Generate + Measure the Layer-Wise Quality & FPS. This baseline needs LightGaussian for pruning multiple models; it is already included in our repo.
bash batch_pnum_analyzer.sh  # analyze pnum of our model in each layer
cd ../LightGaussian
bash ./batch_gen_mmFR.sh # the result will be in ./MMFR/ours-Q
cd ../fov3dgs
python3 quality_eval_layers_mmfr.py # measure quality in each layer; results will be in ./layers_eval_results/MMFR
bash ./batch_mmfr_fps.sh # measure FPS; results will be in ./fps

Acknowledgements

  • Our 3D Gaussian Splatting (3DGS) related code is based on the work from 3DGS.
  • Our Human Visual System (HVS) model code is adapted from the Perception library in Odak.
