
Is it possible to input a geometric proxy to help guide meshing? #402

Open · DanAndersen opened this issue Feb 20, 2019 · 9 comments
Labels: do not close (issue that should stay open; avoids automatic closing when stale) · feature request (feature request from the community)

Comments

@DanAndersen

Hi,
I have a somewhat unusual question. I have acquired an input scene with ~150 photos, which have been well matched and aligned.
[Image: issue_1]

Due to some thin and feature-poor regions of the scene, the meshing stage generates poor results (see the underside and edges of the table, where distorted geometry extends under the floor).

[Image: issue_2]

However, I have a rough geometric model of the scene acquired using active methods (the depth sensors on an AR HMD).

[Image: issue_3]

I am wondering whether there is any feature in Meshroom or similar software that would allow me to inject the rough geometric model as a constraint, encouraging the meshing stage to generate a refined model that is more physically plausible given this geometric proxy.
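
There is no built-in way to do this in Meshroom (see the replies below), but as a rough illustration of the idea, the dense point cloud could be filtered against the proxy before meshing so that points far from any plausible surface are discarded. This is a generic preprocessing sketch, not a Meshroom feature; it assumes the proxy has already been aligned and scaled to the SfM coordinate frame, and the file names and the 5 cm tolerance are placeholders:

```python
# Hypothetical preprocessing sketch: drop SfM points that lie far from a rough
# proxy mesh, so "floating" geometry (e.g. under the floor) never reaches the
# meshing stage. Assumes the proxy is already aligned to the SfM frame.
import numpy as np
import trimesh

proxy = trimesh.load("proxy_from_hmd.ply")   # rough mesh from the AR HMD (placeholder name)
cloud = trimesh.load("sfm_points.ply")       # dense SfM point cloud (placeholder name)
points = np.asarray(cloud.vertices)

# Distance from each SfM point to the closest point on the proxy surface.
_, distances, _ = trimesh.proximity.closest_point(proxy, points)

# Keep only points within a tolerance band around the proxy. The 0.05 m value
# is a guess; tune it to the sensor noise and reconstruction error.
kept = points[distances < 0.05]

trimesh.PointCloud(kept).export("sfm_points_filtered.ply")
```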

@fabiencastan (Member)

There is no such feature yet.

One thing that would be nice to add to Meshroom is the ability to take RGBD images as input and use the provided depth maps in the dense part of the pipeline.
There was a start of work in this direction, updating the scene scale (so that the depth values are in the same coordinate system as the reconstruction) in order to use the provided depth maps, but it is far from usable:
alicevision/AliceVision#489

Best,

@DanAndersen (Author)

Thank you for getting back to me so quickly. Is the potential future feature (adding RGBD images as input) connected with what is described in issue #399?

@fabiencastan (Member)

Yes, it's related. The only difference is that if the depth maps are rendered (as in issue #399), the scale of the scene will be the same between the rendered depth maps and the reconstructed cameras.
If the depth maps come from RGBD sensors and the reconstructed cameras are in an arbitrary coordinate system, we need to adjust the scale between them. That should be the main challenge.
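
To make the scale problem concrete: SfM reconstructs the scene in an arbitrary-scale coordinate system, while an RGBD sensor reports metric depth, so at minimum a global scale factor has to be estimated before the sensor depth maps can be used. Below is a minimal sketch of that step, assuming depth maps rendered from the reconstructed cameras are available for the same views; the function and variable names are illustrative, not AliceVision APIs:

```python
# Illustrative sketch (not AliceVision code): estimate one global scale factor
# between metric sensor depth maps and arbitrary-scale SfM depth maps.
import numpy as np

def estimate_scale(sensor_depths, sfm_depths):
    """Each argument is a list of HxW depth arrays for the same views,
    with 0 marking invalid pixels."""
    ratios = []
    for sensor, sfm in zip(sensor_depths, sfm_depths):
        valid = (sensor > 0) & (sfm > 0)
        ratios.append(sensor[valid] / sfm[valid])
    # The median is robust to outliers from sensor noise and bad matches.
    return np.median(np.concatenate(ratios))

# Multiplying the SfM depths (and camera translations) by this factor puts the
# reconstruction in the sensor's metric units.
```

A single global factor is only the simplest case; a full solution would likely also refine a per-view offset or solve a full similarity transform between the two coordinate systems.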

@hargrovecompany

I think you are way short of the number of photos that you really need to recreate that environment.

@Sudoki commented Mar 28, 2020

Is this feature still being considered?

@fabiencastan (Member)

AFAIK, there is no active development on this topic, but it is an interesting subject and we would be happy to support any initiative in this area.

@kenchikuliu


Hi, great topic. I am also interested in this. What is the status now? Is there any feature in Meshroom or similar software that would allow injecting a rough geometric model as a constraint, to encourage the meshing to generate a refined model that is more physically plausible given the geometric proxy?

@natowi (Member) commented Jan 28, 2021

> Is there any feature in Meshroom or similar software

No, and not that I know of.
