Hi, I noticed that Table 6 of the paper (arXiv version) mentions that the photometric loss doesn't improve performance. However, I am wondering what the rendered RGB and semantic maps look like. Could you provide some visualization results?
Thanks!
Sorry for the late reply. The rendered RGB images are quite blurred, which might be related to the supervision (L2 loss only) and inappropriate hyperparameters (number of Gaussians, etc.).
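For reference, a plain L2 photometric loss as described above can be sketched as follows (a minimal illustration, not the repository's actual training code; the function name and image shapes are assumptions):

```python
import numpy as np

def photometric_l2_loss(rendered, target):
    """Mean squared error between rendered and ground-truth RGB images.

    rendered/target: (H, W, 3) float arrays in [0, 1]. L2-only
    supervision tends to average out high-frequency detail, which is
    one plausible cause of blurred renderings.
    """
    return float(np.mean((rendered - target) ** 2))

# identical images give zero loss
img = np.zeros((4, 4, 3))
assert photometric_l2_loss(img, img) == 0.0
```

In practice, 3DGS-style pipelines often blend this with an SSIM term to better preserve structure, which may be relevant to the blurriness observed here.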
Thank you for providing the additional result about the photometric loss.
By the way, I'm curious how you render RGB.
Did you project the Gaussians as in vanilla Gaussian splatting, i.e., with each Gaussian carrying its own spherical-harmonic coefficients, or did you use an RGB MLP on the voxel representation to predict the color?
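To make the two options in the question concrete, here is a minimal sketch of both color parameterizations (these are generic illustrations, not this repository's implementation; the MLP weights and feature dimensions are hypothetical):

```python
import numpy as np

# Option 1: per-Gaussian color as in vanilla 3DGS. Only the degree-0
# (DC) spherical-harmonic term is shown; higher degrees would add
# view dependence. SH_C0 is the standard degree-0 SH constant.
SH_C0 = 0.28209479177387814

def color_from_sh_dc(sh_dc):
    """Convert a Gaussian's DC SH coefficient (3,) to an RGB color."""
    return np.clip(SH_C0 * sh_dc + 0.5, 0.0, 1.0)

# Option 2 (hypothetical sketch): a tiny MLP mapping a per-voxel
# feature vector, sampled at the Gaussian's position, to RGB.
def color_from_voxel_mlp(feat, W1, b1, W2, b2):
    h = np.maximum(feat @ W1 + b1, 0.0)           # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid -> [0, 1]^3
```

The first option stores appearance on each Gaussian directly; the second derives it from the shared voxel features, which couples color to the same representation used for semantics.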