Hello! Thanks for your amazing work!

I'm trying to implement your FisherRF in the NeRF framework. The problem is that in NeRF, the model parameters do not directly correspond to spatial locations. So, to obtain pixel-wise uncertainty, ChatGPT advised that I render each pixel (ray) individually and take the gradient of its output with respect to the model parameters.

However, how should I render the computed uncertainty? Volume-rendering the gradients of the model parameters seems infeasible. To score a whole view, can I simply sum the pixel-wise uncertainties over the image?
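For what it's worth, here is a toy NumPy sketch of what I have in mind; it is not FisherRF's actual implementation, just my understanding under a diagonal-Fisher assumption. `render_ray` is a hypothetical stand-in for a real volume renderer, and the finite-difference gradients stand in for autograd. Each pixel's uncertainty is scored as g^T (H + λI)^{-1} g with a diagonal H accumulated from per-ray gradients, and the view-level score is just the sum over pixels:

```python
import numpy as np

def render_ray(params, ray_id):
    # Hypothetical stand-in for a NeRF volume renderer: maps all model
    # parameters to one rendered pixel value for this ray.
    w = params * (ray_id + 1)
    return np.tanh(w).sum()

def ray_gradient(params, ray_id, eps=1e-4):
    # Central finite-difference gradient of one pixel w.r.t. all parameters
    # (a real implementation would use autograd instead).
    g = np.zeros_like(params)
    for i in range(params.size):
        p_hi = params.copy(); p_hi[i] += eps
        p_lo = params.copy(); p_lo[i] -= eps
        g[i] = (render_ray(p_hi, ray_id) - render_ray(p_lo, ray_id)) / (2 * eps)
    return g

def pixelwise_uncertainty(params, n_rays, lam=0.1):
    # Accumulate a diagonal Fisher approximation over all rays,
    # then score each pixel as g^T (diag_H + lam*I)^{-1} g.
    grads = [ray_gradient(params, r) for r in range(n_rays)]
    diag_H = np.zeros_like(params)
    for g in grads:
        diag_H += g * g
    return np.array([(g * g / (diag_H + lam)).sum() for g in grads])

params = np.linspace(-0.5, 0.5, 8)
u = pixelwise_uncertainty(params, n_rays=4)   # one uncertainty per pixel/ray
view_score = u.sum()                          # image-level score = sum over pixels
```

Does summing per-pixel scores like `view_score` here match what you do for view selection, or do you aggregate the per-ray Fisher contributions differently?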