
Re-using Dust3r output with new images #98

Open
relh opened this issue Apr 29, 2024 · 9 comments

relh commented Apr 29, 2024

Hi all!

I have a setup where I run dust3r on a few images, then I want to add images to this and run it again.

I've been following issues #54, #30, and #17, and using this ModularPointCloudOptimizer: 4a414b6

I'm wondering if there's anything else that can be preset in a scenario like this. Basically, re-using existing depth maps with _set_depthmap (it doesn't seem to save time; maybe I need to disable grad on the set depth map?).

I haven't been able to get the normal PointCloudOptimizer with compute_global_alignment(init='known_poses') to work either. Each time I try, I get an error about requires_grad when running scene.preset_principal_point.

I'm mostly just opening this issue in case you guys can think of a way to incorporate new images into an existing Dust3r scene efficiently, as this is my use case.
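
For reference, roughly how I'm caching the first run's state so it can be preset later (a minimal, untested sketch; the getters are the ones used in the dust3r demo, except get_principal_points, which is my assumption, and the file name is arbitrary):

```python
import torch

# `scene` is the optimizer returned by global_aligner() after the first
# alignment run; stash everything a later run could re-use.
state = {
    'poses':  scene.get_im_poses().detach().cpu(),          # cam-to-world, (N, 4, 4)
    'focals': scene.get_focals().detach().cpu(),
    'pps':    scene.get_principal_points().detach().cpu(),  # assumed getter name
    'depths': [d.detach().cpu() for d in scene.get_depthmaps()],
}
torch.save(state, 'scene_state.pth')
```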

Thanks so much for making such an incredible project! Looking forward to Mast3r too :).


relh commented Apr 29, 2024

Also, it's interesting that going from 2 images with the PairViewer to 3 images with the ModularPointCloudOptimizer, the scale of the reconstruction often changes quite dramatically! It's easy to fix if we track the scale and initial pose, but just something I noticed~
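
To be concrete about "track scale and initial pose": a minimal sketch (not dust3r-specific, and the helper name is hypothetical) that rescales a new reconstruction so the baseline between two cameras it shares with the previous run matches the previously measured baseline:

```python
import numpy as np

def rescale_to_reference(poses_new, ref_baseline, i=0, j=1):
    """Rescale cam-to-world poses so |t_i - t_j| matches ref_baseline,
    the same camera-to-camera distance measured in the previous run."""
    t_i, t_j = poses_new[i][:3, 3], poses_new[j][:3, 3]
    scale = ref_baseline / np.linalg.norm(t_i - t_j)
    rescaled = []
    for P in poses_new:
        Q = P.copy()
        Q[:3, 3] *= scale             # only translations change
        rescaled.append(Q)
    return rescaled, scale            # apply `scale` to depth maps / pointmaps too
```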


hturki commented May 8, 2024

Re: "Each time I try I get an error about the requires_grad when running scene.preset_principal_point"

Unintuitively, I think you need to initialize global_aligner with the "optimize_pp=True" option beforehand.
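
Something along these lines (a rough, untested sketch; known_poses / known_focals / known_pp are placeholders for values saved from a previous run, and the preset_* helpers are the ones from the ModularPointCloudOptimizer commit linked above):

```python
from dust3r.cloud_opt import global_aligner, GlobalAlignerMode

# `output` is the dust3r inference result on the image pairs.
scene = global_aligner(output, device='cuda',
                       mode=GlobalAlignerMode.ModularPointCloudOptimizer,
                       optimize_pp=True)  # needed before preset_principal_point

scene.preset_pose(known_poses)            # cam-to-world 4x4 matrices
scene.preset_focal(known_focals)
scene.preset_principal_point(known_pp)

loss = scene.compute_global_alignment(init='known_poses',
                                      niter=300, schedule='cosine', lr=0.01)
```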


relh commented May 8, 2024

This is really helpful! I've gotten further than before.

> /home/relh/Code/???????????/dust3r/dust3r/cloud_opt/init_im_poses.py(61)init_from_known_poses()
     60         assert known_poses_msk[n]
---> 61         _, i_j, scale = best_depthmaps[n]
     62         depth = self.pred_i[i_j][:, :, 2]

ipdb> print(n)
0
ipdb> best_depthmaps
{1: (4.287663459777832, '1_0', tensor(0.7287, device='cuda:0')), 2: (2.592036485671997, '2_1', tensor(0., device='cuda:0'))}

I've got this far, and I'll update if I get further!


ljjTYJR commented May 26, 2024

> This is really helpful! I've gotten further than before. […]

For registering the new image, I ran into the problem that the estimated scale is sensitive to noise. I guess it is because the Procrustes solve is not robust; do you have any ideas about it?
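
One generic thing that might help (not dust3r's method, just a standard robust alternative to a least-squares fit): estimate the scale as the median of per-pixel depth ratios, which is much less sensitive to outliers:

```python
import torch

def robust_scale(depth_pred, depth_ref, eps=1e-6):
    """Median-of-ratios scale aligning a predicted depth map to a reference one."""
    mask = (depth_ref > eps) & (depth_pred > eps)
    ratios = depth_ref[mask] / depth_pred[mask]
    return ratios.median()
```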


LongHZ140516 commented Jun 13, 2024

> This is really helpful! I've gotten further than before. […]

hi @relh
I also ran into the problem above and solved it with the method hturki mentioned. But the point cloud I get after preset_poses and preset_intrinsics seems to have problems (my photos were taken by rotating 360 degrees around the center, and the poses were computed and set by myself). Have you encountered this problem? If so, how can I solve it?
[attached image: point cloud result]

Part of the depth estimation stage looks like this, and it seems fine, so I am very confused as to why the final result has problems.
[attached image: depth maps]


MagicTZ commented Aug 17, 2024

> Also, it's interesting that going from 2 images with the PairViewer to 3 images with the ModularPointCloudOptimizer, the scale of the reconstruction often changes quite dramatically! It's easy to fix if we track the scale and initial pose, but just something I noticed~

I have observed this as well. May I ask how to fix this scale issue if I keep using PairViewer for incremental reconstruction and pose estimation? Would you mind giving an example? Thank you~

@tavisshore

I'm experiencing quite a sizable increase in optimisation time when reconstructing with new images.
For example, I first get an output from dust3r for 6 images, storing poses etc.
When adding a new image to the set and running through dust3r again with the modular optimiser, it takes about 50% longer than with no presets. Is this standard?


relh commented Aug 22, 2024

I switched to mast3r, which uses a cache, and it seems to have sped things up when using more than 2 images.


tavisshore commented Aug 22, 2024

> I switched to mast3r, which uses a cache, and it seems to have sped things up when using more than 2 images.

I've tried mast3r too when adding new images to a scene with presets, but it's still slower than just running the whole scene without any presets. Are you adding new images to a scene with known poses?

Edit: If I increase the LR, I can reduce the number of reconstruction iterations, which takes about the same time as uninitialised optimisation but gives better accuracy in the end.
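
For reference, this is the kind of call I mean (a rough sketch; the numbers are just what I've been experimenting with, not recommended defaults, and lr/niter/schedule are the usual keyword arguments forwarded to the alignment loop):

```python
# Poses/intrinsics already preset from the previous run, so run fewer
# iterations at a higher learning rate than the demo's niter=300, lr=0.01.
loss = scene.compute_global_alignment(init='known_poses',
                                      niter=150,
                                      schedule='cosine',
                                      lr=0.05)
```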
