Modules, operators and utilities for 3D neural rendering in single-object, multi-object, categorical and large-scale scenes.
Pull requests and collaborations are warmly welcomed 🤗! Please follow our code style if you want to make any contribution.
Feel free to open an issue or contact Jianfei Guo at ffventus@gmail.com if you have any questions or proposals.
- python >= 3.8
- pytorch >= 1.10, != 1.12
  - also works for pytorch >= 2
- CUDA dev >= 10.0
  - must match the major CUDA version that your pytorch build uses

An example of our platform (python=3.8, pytorch=1.11, cuda=11.3 / 11.7):
```shell
conda create -n nr3d python=3.8
conda activate nr3d
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
```
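Since the CUDA extensions must be built against a toolkit whose major version matches the one your pytorch was built with, a quick sanity check can be sketched as follows (a minimal illustration; the version strings are examples, in practice compare `torch.version.cuda` with the release reported by `nvcc --version`):

```python
def cuda_major_matches(torch_cuda: str, system_cuda: str) -> bool:
    """Return True when the two CUDA versions share the same major version.

    torch_cuda:  e.g. the value of torch.version.cuda, such as "11.3"
    system_cuda: e.g. the release reported by `nvcc --version`, such as "11.7"
    """
    return torch_cuda.split(".")[0] == system_cuda.split(".")[0]

# 11.3 and 11.7 share major version 11, so this pairing is fine:
print(cuda_major_matches("11.3", "11.7"))  # True
print(cuda_major_matches("10.2", "11.3"))  # False
```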
- pytorch_scatter

```shell
conda install pytorch-scatter -c pyg
```

- other pip packages

```shell
pip install opencv-python-headless kornia imagesize omegaconf addict \
  imageio imageio-ffmpeg scikit-image scikit-learn pyyaml pynvml psutil \
  seaborn==0.12.0 trimesh plyfile ninja icecream tqdm tensorboard \
  torchmetrics
```
`cd` to the `nr3d_lib` directory, and then run (notice the trailing dot `.`):

```shell
pip install -v .
```

📌 NOTE: For pytorch >= 2.2, the C++17 standard is required. In this case, run:

```shell
USE_CPP17=1 pip install -v .
```
Optional functionalities
- Visualization

  ```shell
  pip install open3d vedo==2023.4.6 mayavi
  ```

- tiny-cuda-nn backends

  ```shell
  pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
  ```

  or

  ```shell
  pip install git+https://github.com/PJLab-ADG/NeuS2_TCNN/#subdirectory=bindings/torch
  ```

  which supports double-backward of fused MLPs (still buggy).

- GUI support (experimental)

  ```shell
  # opengl
  pip install pyopengl
  # imgui
  pip install imgui
  # glumpy
  pip install git+https://github.com/glumpy/glumpy.git@46a7635c08d3a200478397edbe0371a6c59cd9d7#egg=glumpy
  # pycuda
  git clone https://github.com/inducer/pycuda
  cd pycuda
  ./configure.py --cuda-root=/usr/local/cuda --cuda-enable-gl
  python setup.py install
  ```
- `LoTD`: Levels of Tensorial Decomposition
- `pack_ops`: pack-wise operations for packed tensors
- `occ_grids`: occupancy grids for accelerating ray marching
- `attributes`: unified API framework for scene node attributes
- `fields`: implicit representations
- Code: `models/grids/lotd`
- Supported scenes:
  - single scene
  - batched / categorical scene
  - large-scale scene
- Main features:
  - different layers can use different types
  - different layers can use different widths (`n_feats`)
  - all types support cuboid resolutions
  - all types support forward, first-order gradients and second-order gradients
  - all types support batched encoding: inference with batched inputs or `batch_inds`
  - all types support large-scale scene representation
- Supported LoTD types and calculations of forward, gradients (`dLd[]`) and second-order gradients (`d(dLdx)d[]`)

📌 All implemented with Pytorch-CUDA extension

| Type | Dim | forward | dLdparam | dLdx | d(dLdx)d(param) | d(dLdx)d(dLdy) | d(dLdx)dx |
|---|---|---|---|---|---|---|---|
| `Dense` | 2-4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| `Hash` (hash-grids in NGP) | 2-4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| `VectorMatrix` or `VM` (Vector-Matrix in TensoRF) | 3 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| `VecZMatXoY` (modified from TensoRF, using only the xoy matrix and the z vector) | 3 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| `CP` (CP in TensoRF) | 2-4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| `NPlaneSum` ("TriPlane" in EG3D) | 3-4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| `NPlaneMul` | 3-4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
- A demo config yaml with all cubic resolutions:

```yaml
lod_res: [32, 64, 128, 256, 512, 1024, 2048, 4096]
lod_n_feats: [4, 4, 8, 4, 2, 16, 8, 4]
lod_types: [Dense, Dense, VM, VM, VM, CP, CP, CP]
```

- A demo config yaml with all cuboid resolutions (usually auto-computed in practice):

```yaml
lod_res: [[144, 56, 18], [199, 77, 25], [275, 107, 34], [380, 148, 47], [525, 204, 65], [726, 282, 91], [1004, 390, 126], [1387, 539, 174]]
lod_n_feats: [4, 4, 4, 4, 2, 2, 2, 2]
lod_types: [Dense, Dense, Hash, Hash, Hash, Hash, Hash, Hash]
log2_hashmap_size: 19
```
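As a rough illustration of what these per-level settings imply, the sketch below estimates the number of stored floats per level of the cuboid demo config, assuming `Dense` levels store `prod(res)` cells and `Hash` levels are capped at `2**log2_hashmap_size` cells as in Instant-NGP. This is an illustrative estimate only, not the library's exact allocation, and it does not cover the `VM` / `CP` / plane types, whose storage layouts differ:

```python
from math import prod

def level_param_count(res, n_feats, lod_type, log2_hashmap_size=19):
    """Estimated number of stored floats for one Dense or Hash LoTD level."""
    n_cells = prod(res) if isinstance(res, (list, tuple)) else res ** 3
    if lod_type == "Hash":
        # Hash tables are capped at 2**log2_hashmap_size entries (as in NGP)
        n_cells = min(n_cells, 2 ** log2_hashmap_size)
    return n_cells * n_feats

# First (Dense) and third (Hash) levels of the cuboid demo config:
print(level_param_count([144, 56, 18], 4, "Dense"))  # 580608
print(level_param_count([275, 107, 34], 4, "Hash"))  # 2097152 (capped at 2**19 cells)
```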
Code: `render/pack_ops`

Check out `docs/pack_ops.md` for more!
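To make the "packed tensor" idea concrete: samples from variable-length rays are concatenated into one flat buffer alongside per-pack boundary info, and reductions then run pack-wise. Below is a toy pure-Python analogue of a pack-wise sum (illustrative only; the actual operators are CUDA kernels over torch tensors, and the boundary encoding here is an assumption for the sketch):

```python
def packed_sum(vals, pack_infos):
    """Pack-wise sum over a flat buffer.

    vals:       flat list holding all rays' samples back to back
    pack_infos: per-ray (start, length) pairs into `vals`
    """
    return [sum(vals[s:s + n]) for s, n in pack_infos]

# Three rays with 2, 1 and 3 samples packed into one buffer:
vals = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
pack_infos = [(0, 2), (2, 1), (3, 3)]
print(packed_sum(vals, pack_infos))  # [3.0, 3.0, 15.0]
```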
Code: `render/raymarch/occgrid_raymarch.py`
This part is primarily borrowed from and modified upon nerfacc.
- Support single scene
- Support batched / categorical scene
- Support large-scale scene
- Efficient multi-stage hierarchical ray marching on occupancy grids
- introduced in StreetSurf paper section 4.1
- implementation in `models/fields/neus/renderer_mixin.py`
- batched implementation in `models/fields_conditional/neus/renderer_mixin.py`
- large-scale implementation is still WIP...
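The core idea of occupancy-grid ray marching can be sketched in a few lines of pure Python: step along the ray and keep only sample points whose voxel is flagged occupied, so empty space contributes no network queries. This toy version (uniform steps in the unit cube, illustrative names) omits the multi-stage hierarchy and CUDA batching of the real implementation:

```python
def occgrid_march(origin, direction, occ, grid_res, step, n_steps):
    """March fixed steps along a ray inside the unit cube, keeping only
    sample points whose occupancy-grid voxel is marked occupied."""
    samples = []
    for k in range(n_steps):
        p = [origin[i] + k * step * direction[i] for i in range(3)]
        idx = [int(p[i] * grid_res) for i in range(3)]
        inside = all(0 <= idx[i] < grid_res for i in range(3))
        if inside and occ[idx[0]][idx[1]][idx[2]]:
            samples.append(p)
    return samples

# A 2x2x2 grid where only voxel (0, 0, 0) is occupied:
occ = [[[False] * 2 for _ in range(2)] for _ in range(2)]
occ[0][0][0] = True
kept = occgrid_march([0.1, 0.1, 0.1], [1.0, 0.0, 0.0], occ, 2, 0.25, 4)
print(len(kept))  # 2: the samples at x=0.1 and x=0.35 fall in the occupied voxel
```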
Code: `models/attributes`
We extend `torch.Tensor` to represent common types of data involved in 3D neural rendering, e.g. transforms (SO3, SE3) and camera models (pinhole, OpenCV, fisheye), in order to eliminate concerns about tensor shapes, different variants and gradients, exposing only common APIs regardless of the underlying implementation.

These data types can have multiple variants that are all used the same way. For example, SE3 can be represented by RT matrices, 4x4 matrices, or exponential coordinates, let alone the different representations of the underlying SO3 (quaternions, axis-angles, Euler angles...) when using RT as SE3. But the APIs stay the same across variants, e.g. `transform()`, `rotate()`, `mat_3x4()`, `mat_4x4()`, `inv()`, default transform, etc. In addition, the data can carry complex shape prefixes like `[4,4]`, `[B,4,4]` or `[N,B,4,4]`. Once a type is implemented under our framework and settings, you only need to care about the APIs and can forget all the underlying calculations and tensor shape rearrangements.

You can check out `models/attributes/transform.py` for a better understanding. Another example is `models/attributes/camera_param.py`.
Most of the basic `torch.Tensor` operations are implemented for `Attr` and `AttrNested`, e.g. slicing (arbitrary slices with `:` and `...` are supported), indexing, `.to()`, `.clone()`, `.stack()`, `.concat()`. Gradient flows and `nn.Parameter` / buffer registration are also kept / supported if needed.
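As a toy illustration of the idea (not the library's actual classes), an SE3 attribute stored as rotation + translation can expose the `mat_4x4()` / `inv()` / `transform()` API mentioned above while hiding its internal storage; a variant stored as a 4x4 matrix or exponential coordinates would expose the exact same methods:

```python
class ToySE3RT:
    """Toy SE3 stored as a 3x3 rotation (nested lists) plus a translation."""
    def __init__(self, R, t):
        self.R, self.t = R, t

    def mat_4x4(self):
        # Assemble the homogeneous 4x4 matrix from R and t
        return [self.R[i] + [self.t[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

    def inv(self):
        # Inverse of a rigid transform: (R^T, -R^T t)
        Rt = [[self.R[j][i] for j in range(3)] for i in range(3)]
        t_inv = [-sum(Rt[i][j] * self.t[j] for j in range(3)) for i in range(3)]
        return ToySE3RT(Rt, t_inv)

    def transform(self, p):
        # Apply the transform to a 3D point: R p + t
        return [sum(self.R[i][j] * p[j] for j in range(3)) + self.t[i] for i in range(3)]

pose = ToySE3RT([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], [1.0, 2.0, 3.0])
print(pose.transform([0.0, 0.0, 0.0]))        # [1.0, 2.0, 3.0]
print(pose.inv().transform([1.0, 2.0, 3.0]))  # [0.0, 0.0, 0.0]
```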
Code: `models/fields`
- `sdf`
  - LoTD encoding [`lotd_sdf.py`]
  - Permuto encoding [`permuto_sdf.py`]
  - Basic MLP [`mlp_sdf.py`]
- `neus`
  - LoTD encoding [`lotd_neus.py`]
  - Permuto encoding [`permuto_neus.py`]
  - Basic MLP [`mlp_neus.py`]
  - Multi-stage hierarchical sampling on occupancy grids [`models/fields/neus/renderer_mixin.py`]
- `nerf`
  - EmerNeRF [`emernerf.py`]
  - LoTD encoding [`lotd_nerf.py`]
  - Basic MLP [`mlp_nerf.py`]
  - NeRF++ (code in `models/fields_distant`)
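For context on the `neus` fields: NeuS converts SDF samples to per-interval opacity via the logistic CDF. Below is a minimal sketch of the discrete alpha formula from the NeuS paper (illustrative only, not this repo's exact renderer code; `inv_s` is the learned inverse standard deviation):

```python
import math

def neus_alpha(sdf_prev, sdf_next, inv_s):
    """Discrete NeuS opacity for one ray interval, using the logistic CDF
    Phi_s(x) = sigmoid(inv_s * x) of two consecutive SDF samples."""
    cdf_prev = 1.0 / (1.0 + math.exp(-inv_s * sdf_prev))
    cdf_next = 1.0 / (1.0 + math.exp(-inv_s * sdf_next))
    # Clamp to [0, 1): negative values mean the SDF is increasing (no entry)
    return max((cdf_prev - cdf_next) / max(cdf_prev, 1e-8), 0.0)

print(neus_alpha(1.0, 1.0, 10.0))    # 0.0: no surface crossed in this interval
print(neus_alpha(0.5, -0.5, 100.0))  # ~1.0: the interval crosses the zero level set
```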
Code: `models/fields_conditional`

- `neus`
  - Generative permuto encoding [`generative_permuto_neus.py`]
  - Hypernetwork that grows LoTD encoding [`style_lotd_neus.py`]
Code: `models/fields_forest`

- `neus`
  - Multi-continuous-block (a.k.a. forest) NeuS [`lotd_forest_neus.py`]
- `sdf`
  - Multi-continuous-block (a.k.a. forest) SDF [`lotd_forest_sdf.py`]
- `plot`: 2D & 3D plotting tools for developers
- `models/importance.py`: errormap update & 2D importance sampling (inverse 2D CDF sampling); modified from NGP and re-implemented in PyTorch
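To illustrate the inverse-CDF sampling behind `models/importance.py` (shown in 1D with pure Python for brevity; the module applies the same idea over a 2D errormap with torch tensors):

```python
import bisect

def sample_inverse_cdf(weights, u):
    """Map a uniform sample u in [0, 1) to a bin index, with probability
    proportional to the bin's weight (1D inverse-CDF sampling)."""
    total = float(sum(weights))
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    return bisect.bisect_right(cdf, u)

# Bin 1 carries all of the error, so every draw lands there:
print(sample_inverse_cdf([0.0, 1.0, 0.0], 0.5))  # 1
# Uniform weights split [0, 1) evenly between the two bins:
print(sample_inverse_cdf([1.0, 1.0], 0.25), sample_inverse_cdf([1.0, 1.0], 0.75))  # 0 1
```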
- Release batched ray marching
- Release LoTD-Growers and Style-LoTD-NeuS
- Release large-scale representation, large-scale ray marching and large-scale neus
- Implement dmtet
- Implement permuto-SDF
- Basic examples & tutorials
- How to use single / batched / large-scale LoTD
- Example on batched ray marching & batched LoTD inference
- Example on efficient multi-stage hierarchical sampling based on occupancy grids
If you find this library useful, please cite our paper, which introduces `pack_ops`, cuboid hash-grids and efficient NeuS rendering.
```bibtex
@article{guo2023streetsurf,
  title   = {StreetSurf: Extending Multi-view Implicit Surface Reconstruction to Street Views},
  author  = {Guo, Jianfei and Deng, Nianchen and Li, Xinyang and Bai, Yeqi and Shi, Botian and Wang, Chiyu and Ding, Chenjing and Wang, Dongliang and Li, Yikang},
  journal = {arXiv preprint arXiv:2306.04988},
  year    = {2023}
}
```