I am trying to implement depth-based reconstruction (using processed depth maps acquired from LiDAR data) with the depth-based losses currently implemented in depth-nerfacto. I would like to train using depth values only, with no RGB images used during reconstruction.
However, each frame in my JSON file is required to have a file_path key, which depth-nerfacto needs to run. Because of this, I am fairly confident that the current depth-nerfacto requires RGB images to run properly.
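For reference, here is roughly what a frame entry in my transforms.json looks like (the paths and pose matrix are placeholders, but the file_path / depth_file_path keys are the ones I am describing):

```json
{
  "frames": [
    {
      "file_path": "images/frame_00001.png",
      "depth_file_path": "depths/frame_00001.png",
      "transform_matrix": [
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1]
      ]
    }
  ]
}
```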
As a clarifying question, how does depth-nerfacto compute its loss function? I tried, for instance, pointing file_path at a blank image with the same resolution as my depth maps, and neither an RGB nor a depth volume could be reconstructed. If I provide only a file_path key with no depth_file_path, it falls back to estimating pseudo-depth via monocular depth estimation.
Does the depth-nerfacto model need RGB data in order to work properly? Would it be straightforward to rewrite the loss function to not use file_path (which I believe points to the RGB image), or would that amount to a completely different NeRF model?
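To make my mental model concrete: I am assuming the objective is an RGB term plus a weighted depth term, as in this toy sketch. The function name, signature, and MSE stand-ins here are my own invention; my understanding is that the real depth-nerfacto uses DS-NeRF/URF-style depth losses scaled by a depth_loss_mult setting, not plain MSE. What I want to know is whether simply zeroing the RGB term (rgb_weight = 0 below) would be enough, or whether the RGB supervision is structurally required.

```python
def combined_loss(rgb_pred, rgb_gt, depth_pred, depth_gt,
                  depth_loss_mult=1e-3, rgb_weight=1.0):
    """Toy sketch of a depth-nerfacto-style objective (my assumption,
    not the actual implementation): a weighted sum of an RGB term and
    a depth term. Setting rgb_weight=0 would make it depth-only."""
    def mse(a, b):
        # Mean squared error as a stand-in for the real photometric
        # and depth losses.
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    return rgb_weight * mse(rgb_pred, rgb_gt) + depth_loss_mult * mse(depth_pred, depth_gt)
```

With rgb_weight=0 the RGB images would only be needed to satisfy the dataparser, not the optimization, which is why I am asking whether the file_path requirement is fundamental.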