Summary: Migrate away from NDCGridRaysampler and GridRaysampler to their more flexible replacements, NDCMultinomialRaysampler and MultinomialRaysampler.
Reviewed By: patricklabatut
Differential Revision: D33281584
fbshipit-source-id: 65f8702e700a32d38f7cd6bda3924bb1707a0633
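In practice this is a rename at the call sites touched here: `NDCMultinomialRaysampler` accepts the same constructor arguments used in these tutorials, so only the class name changes. A minimal before/after sketch (parameter values are illustrative, not taken from the diff):

```python
from pytorch3d.renderer import NDCMultinomialRaysampler

# Before (deprecated):
#   raysampler = NDCGridRaysampler(
#       image_width=128, image_height=128, n_pts_per_ray=128,
#       min_depth=0.1, max_depth=3.0,
#   )
# After: same arguments, new class name.
raysampler = NDCMultinomialRaysampler(
    image_width=128,
    image_height=128,
    n_pts_per_ray=128,
    min_depth=0.1,
    max_depth=3.0,
)
```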
docs/tutorials/fit_simple_neural_radiance_field.ipynb (+6 −6)
@@ -100,7 +100,7 @@
    "from pytorch3d.transforms import so3_exp_map\n",
    "from pytorch3d.renderer import (\n",
    "    FoVPerspectiveCameras, \n",
-   "    NDCGridRaysampler,\n",
+   "    NDCMultinomialRaysampler,\n",
    "    MonteCarloRaysampler,\n",
    "    EmissionAbsorptionRaymarcher,\n",
    "    ImplicitRenderer,\n",
@@ -186,7 +186,7 @@
    "The renderer is composed of a *raymarcher* and a *raysampler*.\n",
    "- The *raysampler* is responsible for emitting rays from image pixels and sampling the points along them. Here, we use two different raysamplers:\n",
    "    - `MonteCarloRaysampler` is used to generate rays from a random subset of pixels of the image plane. The random subsampling of pixels is carried out during **training** to decrease the memory consumption of the implicit model.\n",
-   "    - `NDCGridRaysampler` which follows the standard PyTorch3D coordinate grid convention (+X from right to left; +Y from bottom to top; +Z away from the user). In combination with the implicit model of the scene, `NDCGridRaysampler` consumes a large amount of memory and, hence, is only used for visualizing the results of the training at **test** time.\n",
+   "    - `NDCMultinomialRaysampler` which follows the standard PyTorch3D coordinate grid convention (+X from right to left; +Y from bottom to top; +Z away from the user). In combination with the implicit model of the scene, `NDCMultinomialRaysampler` consumes a large amount of memory and, hence, is only used for visualizing the results of the training at **test** time.\n",
    "- The *raymarcher* takes the densities and colors sampled along each ray and renders each ray into a color and an opacity value of the ray's source pixel. Here we use the `EmissionAbsorptionRaymarcher` which implements the standard Emission-Absorption raymarching algorithm."
   ]
  },
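The training-time raysampler mentioned in this cell is not touched by the diff; the sketch below shows how it is typically instantiated in this tutorial's setup. The specific values (750 rays per image, 128 points per ray, depth range up to 3.0) are illustrative assumptions, not part of this commit.

```python
from pytorch3d.renderer import MonteCarloRaysampler

# Sample rays at a random subset of image-plane locations (NDC range [-1, 1])
# during training to bound the memory consumption of the implicit model.
raysampler_mc = MonteCarloRaysampler(
    min_x=-1.0,
    max_x=1.0,
    min_y=-1.0,
    max_y=1.0,
    n_rays_per_image=750,  # illustrative value
    n_pts_per_ray=128,     # same ray resolution as the grid raysampler
    min_depth=0.1,
    max_depth=3.0,         # assumed scene extent, as in the tutorial's volume_extent_world
)
```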
@@ -211,10 +211,10 @@
    "\n",
    "# 1) Instantiate the raysamplers.\n",
    "\n",
-   "# Here, NDCGridRaysampler generates a rectangular image\n",
+   "# Here, NDCMultinomialRaysampler generates a rectangular image\n",
    "# grid of rays whose coordinates follow the PyTorch3D\n",
    "# coordinate conventions.\n",
-   "raysampler_grid = NDCGridRaysampler(\n",
+   "raysampler_grid = NDCMultinomialRaysampler(\n",
    "    image_height=render_size,\n",
    "    image_width=render_size,\n",
    "    n_pts_per_ray=128,\n",
@@ -844,7 +844,7 @@
    "        fov=target_cameras.fov[0],\n",
    "        device=device,\n",
    "    )\n",
-   "    # Note that we again render with `NDCGridRaySampler`\n",
+   "    # Note that we again render with `NDCMultinomialRaysampler`\n",
    "    # and the batched_forward function of neural_radiance_field.\n",
    "    frames.append(\n",
    "        renderer_grid(\n",
@@ -867,7 +867,7 @@
   "source": [
    "## 6. Conclusion\n",
    "\n",
-   "In this tutorial, we have shown how to optimize an implicit representation of a scene such that the renders of the scene from known viewpoints match the observed images for each viewpoint. The rendering was carried out using the PyTorch3D's implicit function renderer composed of either a `MonteCarloRaysampler` or `NDCGridRaysampler`, and an `EmissionAbsorptionRaymarcher`."
+   "In this tutorial, we have shown how to optimize an implicit representation of a scene such that the renders of the scene from known viewpoints match the observed images for each viewpoint. The rendering was carried out using the PyTorch3D's implicit function renderer composed of either a `MonteCarloRaysampler` or `NDCMultinomialRaysampler`, and an `EmissionAbsorptionRaymarcher`."
docs/tutorials/fit_textured_volume.ipynb (+5 −5)
@@ -89,7 +89,7 @@
    "from pytorch3d.renderer import (\n",
    "    FoVPerspectiveCameras, \n",
    "    VolumeRenderer,\n",
-   "    NDCGridRaysampler,\n",
+   "    NDCMultinomialRaysampler,\n",
    "    EmissionAbsorptionRaymarcher\n",
    ")\n",
    "from pytorch3d.transforms import so3_exp_map\n",
@@ -164,7 +164,7 @@
    "The following initializes a volumetric renderer that emits a ray from each pixel of a target image and samples a set of uniformly-spaced points along the ray. At each ray-point, the corresponding density and color value is obtained by querying the corresponding location in the volumetric model of the scene (the model is described & instantiated in a later cell).\n",
    "\n",
    "The renderer is composed of a *raymarcher* and a *raysampler*.\n",
-   "- The *raysampler* is responsible for emitting rays from image pixels and sampling the points along them. Here, we use the `NDCGridRaysampler` which follows the standard PyTorch3D coordinate grid convention (+X from right to left; +Y from bottom to top; +Z away from the user).\n",
+   "- The *raysampler* is responsible for emitting rays from image pixels and sampling the points along them. Here, we use the `NDCMultinomialRaysampler` which follows the standard PyTorch3D coordinate grid convention (+X from right to left; +Y from bottom to top; +Z away from the user).\n",
    "- The *raymarcher* takes the densities and colors sampled along each ray and renders each ray into a color and an opacity value of the ray's source pixel. Here we use the `EmissionAbsorptionRaymarcher` which implements the standard Emission-Absorption raymarching algorithm."
   ]
  },
@@ -186,14 +186,14 @@
    "volume_extent_world = 3.0\n",
    "\n",
    "# 1) Instantiate the raysampler.\n",
-   "# Here, NDCGridRaysampler generates a rectangular image\n",
+   "# Here, NDCMultinomialRaysampler generates a rectangular image\n",
    "# grid of rays whose coordinates follow the PyTorch3D\n",
    "# coordinate conventions.\n",
    "# Since we use a volume of size 128^3, we sample n_pts_per_ray=150,\n",
    "# which roughly corresponds to a one ray-point per voxel.\n",
    "# We further set the min_depth=0.1 since there is no surface within\n",
    "# 0.1 units of any camera plane.\n",
-   "raysampler = NDCGridRaysampler(\n",
+   "raysampler = NDCMultinomialRaysampler(\n",
    "    image_width=render_size,\n",
    "    image_height=render_size,\n",
    "    n_pts_per_ray=150,\n",
@@ -462,7 +462,7 @@
   "source": [
    "## 6. Conclusion\n",
    "\n",
-   "In this tutorial, we have shown how to optimize a 3D volumetric representation of a scene such that the renders of the volume from known viewpoints match the observed images for each viewpoint. The rendering was carried out using the PyTorch3D's volumetric renderer composed of an `NDCGridRaysampler` and an `EmissionAbsorptionRaymarcher`."
+   "In this tutorial, we have shown how to optimize a 3D volumetric representation of a scene such that the renders of the volume from known viewpoints match the observed images for each viewpoint. The rendering was carried out using the PyTorch3D's volumetric renderer composed of an `NDCMultinomialRaysampler` and an `EmissionAbsorptionRaymarcher`."