Way to train just with depth maps? Ignore file_path flag if just using depth data? #3589

Open
aidris823 opened this issue Feb 8, 2025 · 0 comments

Hello:

I am trying to implement depth-only reconstruction from processed depth maps acquired from LiDAR data, using the depth-based losses currently implemented in depth-nerfacto. I would like to train on depth values only, with no RGB images used during reconstruction.

However, I am required to have a file_path key for each frame in my JSON file, which depth-nerfacto needs in order to run. I am fairly confident that the current depth-nerfacto implementation needs RGB images to run properly.
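
For concreteness, here is a minimal sketch of the kind of per-frame entry I mean; the paths, intrinsics, and pose matrix are placeholders. depth_file_path is the data I actually have, while file_path is the key the loader insists on:

```python
import json

# Placeholder frame entry -- paths, intrinsics, and the pose matrix are made up for illustration.
frame = {
    "file_path": "images/frame_0001.png",       # RGB image that the loader requires
    "depth_file_path": "depth/frame_0001.png",  # LiDAR-derived depth map (what I actually have)
    "transform_matrix": [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ],
}

transforms = {
    "fl_x": 600.0, "fl_y": 600.0, "cx": 320.0, "cy": 240.0, "w": 640, "h": 480,
    "frames": [frame],
}

with open("transforms.json", "w") as f:
    json.dump(transforms, f, indent=2)
```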

As a clarifying question, how does depth-nerfacto compute its loss? For instance, I tried pointing file_path at a blank image with the same resolution as my depth maps, but then neither an RGB nor a depth volume could be reconstructed. If I provide only a file_path key with no depth_file_path, it instead tries to estimate pseudodepth with monocular depth estimation.

Does the depth-nerfacto model need RGB data in order to work properly? Would it be trivial to rewrite the loss function so that it does not use file_path (which I believe points to the RGB image), or would that effectively be a completely different NeRF model?
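
For what it is worth, the naive approach I had in mind is sketched below: subclass the depth model and drop the photometric term from the loss dictionary. I have not verified that get_loss_dict actually exposes an "rgb_loss" key, so the key name (and whether the rest of the pipeline tolerates this) is an assumption on my part, not the confirmed nerfstudio API:

```python
# Hypothetical sketch only -- assumes the loss dictionary returned by the parent model
# contains an "rgb_loss" entry; the actual key names in nerfstudio may differ.
from nerfstudio.models.depth_nerfacto import DepthNerfactoModel


class DepthOnlyNerfactoModel(DepthNerfactoModel):
    """Variant of depth-nerfacto intended to be supervised by depth only."""

    def get_loss_dict(self, outputs, batch, metrics_dict=None):
        loss_dict = super().get_loss_dict(outputs, batch, metrics_dict)
        # Remove the photometric term so only the depth loss and regularizers remain.
        loss_dict.pop("rgb_loss", None)
        return loss_dict
```

Even if something like this works on the loss side, I suspect the dataparser would still require a valid file_path per frame, which is the part I do not know how to avoid.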
