First, thanks for the great work and the clean repository; it has been very easy to use.
I have one question regarding line 97 of https://github.com/ingra14m/Deformable-3D-Gaussians/blob/main/train.py:
d_xyz, d_rotation, d_scaling = deform.step(gaussians.get_xyz.detach(), time_input + ast_noise)
The paper also indicates a stop-gradient operation at this step, but I couldn't find an explanation for this choice in either the main paper or the supplementary material. Why are gradients stopped from flowing from the deformation network back to the Gaussians?
We want the position gradient of the 3D Gaussians to be clean: the update of the canonical Gaussian positions should come only from the RGB loss, not also through the deformation-field branch. In principle, this decouples the learning of the deformation field from the learning of the canonical Gaussians, which makes joint training feasible.
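Here is a minimal PyTorch sketch of the effect of that detach. It is not the repository's actual deformation network; a toy MLP (`deform_mlp`) and toy tensor shapes stand in for it, purely to show which gradient path `detach()` cuts and which path survives:

```python
import torch

# Toy stand-ins (hypothetical shapes): canonical Gaussian centers and times.
xyz = torch.randn(1000, 3, requires_grad=True)
time_input = torch.rand(1000, 1)

# Toy deformation network mapping (position, time) -> position offset.
deform_mlp = torch.nn.Sequential(
    torch.nn.Linear(4, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3)
)

# detach() cuts the graph at the MLP input: gradients of d_xyz do NOT
# flow back into xyz through the deformation network.
d_xyz = deform_mlp(torch.cat([xyz.detach(), time_input], dim=-1))

# The deformed position still depends on xyz directly via this addition,
# so the rendering (RGB) loss can still update the canonical positions.
deformed_xyz = xyz + d_xyz

# Stand-in for the rendering loss.
loss = deformed_xyz.square().sum()
loss.backward()

# xyz.grad now contains only the gradient from the direct additive path;
# nothing was routed through the deformation MLP's input.
print(xyz.grad.shape)
```

So the canonical positions receive a single, direct gradient from the loss, while the deformation network is trained through its own parameters only.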