
training poses / code for LLFF, COLMAP-based examples? #7

Open
pwais opened this issue Dec 30, 2021 · 2 comments

Comments


pwais commented Dec 30, 2021

Dear Authors,

Thank you for making this work available! I've been able to reproduce some of the DTU results but am struggling with my own data. In particular, on my own scenes, the first ray_marching call in unisurf never finds any intersections (all depths are 0). I've tried scaling my scenes' poses by several orders of magnitude (both up and down), the opencv-to-opengl trick, and a few other things, but unisurf still produces nothing but a mesh of a cube with some splotches cut out of the sides. When I instead tried some of the DTU poses you provide, I got a noisy cloud mesh, which is what I'd expect (since those poses and my RGB images are not consistent).
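For reference, by "the opencv-to-opengl trick" I mean flipping the y and z camera axes. A minimal numpy sketch of what I tried (`c2w` is a hypothetical 4x4 camera-to-world matrix; names are mine, not from the repo):

```python
import numpy as np

def opencv_to_opengl(c2w):
    """Convert an OpenCV-convention camera-to-world pose
    (x right, y down, z forward) to the OpenGL convention
    (x right, y up, z backward) by flipping the y and z camera axes."""
    flip = np.diag([1.0, -1.0, -1.0, 1.0])
    return c2w @ flip

# Example: for the identity pose, only the y and z columns change sign.
print(opencv_to_opengl(np.eye(4)))
```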

When I inspect the DTU poses you provide, the data seems a bit strange: the RT matrices have very large entries, the R component does not appear to be orthonormal, and the scale factor also appears to be large. There are many ways to factor a P matrix, but the pose data nevertheless struck me as odd.
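This is the kind of check I ran on the R components (a quick numpy sketch of my own; a proper rotation must be orthonormal with determinant +1, and a scaled pose fails the orthonormality test):

```python
import numpy as np

def looks_like_rotation(R, tol=1e-5):
    """True iff a 3x3 matrix is a proper rotation:
    R @ R.T == I and det(R) == +1 (within tolerance)."""
    return (np.allclose(R @ R.T, np.eye(3), atol=tol)
            and np.isclose(np.linalg.det(R), 1.0, atol=tol))

# A valid rotation passes; the same matrix scaled by 100 does not.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
print(looks_like_rotation(R))        # True
print(looks_like_rotation(100 * R))  # False
```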

Could you please make available the poses (and perhaps the code) you used in the COLMAP-based studies? COLMAP outputs poses that make a bit more sense (the R in the RT at least comes from a unit quaternion), and this data would help me debug what's going on.
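For concreteness, this is the conversion I'd use from a COLMAP images.txt entry (qw qx qy qz tx ty tz, which COLMAP stores in world-to-camera convention) to a 4x4 camera-to-world matrix; this reflects my understanding of COLMAP's format, so please correct me if the repo expects something else:

```python
import numpy as np

def qvec2rotmat(q):
    """Unit quaternion (qw, qx, qy, qz) -> 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*w*z,     2*x*z + 2*w*y],
        [2*x*y + 2*w*z,     1 - 2*x*x - 2*z*z, 2*y*z - 2*w*x],
        [2*x*z - 2*w*y,     2*y*z + 2*w*x,     1 - 2*x*x - 2*y*y],
    ])

def colmap_to_c2w(qvec, tvec):
    """COLMAP gives world-to-camera (R, t); invert to camera-to-world."""
    R = qvec2rotmat(qvec)
    c2w = np.eye(4)
    c2w[:3, :3] = R.T                       # inverse rotation
    c2w[:3, 3] = -R.T @ np.asarray(tvec)    # camera center in world frame
    return c2w
```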


Impuuuuuu commented Jan 5, 2022


Hi, I ran into the same issue before. My workaround was to adjust the bias initialization of the last linear layer in the occupancy network so that the module can produce both positive and negative outputs, which guarantees that a surface of the object exists at initialization and gives the network something to converge to. However, a bias that works for one object fails on others, so I suspect the authors normalize their data into a certain range. That said, I am still quite confused by parts of this work.
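To illustrate why the bias matters (a toy numpy sketch of my own, not the repo's code): the surface is the zero level set of the pre-activation of the occupancy head, so if the last layer's bias keeps the pre-activation on one sign everywhere in the scene bounds, there is no initial surface and ray marching finds nothing. The feature and weight tensors below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical penultimate features for points sampled in the volume,
# and hypothetical last-layer weights.
feats = rng.normal(size=(1000, 16))
w = 0.1 * rng.normal(size=16)

def has_surface(bias):
    """A zero crossing of the pre-activation w.f + b must exist among
    the samples for the occupancy level set (the surface) to exist."""
    logits = feats @ w + bias
    return logits.min() < 0.0 < logits.max()

print(has_surface(0.0))    # mixed signs: a surface exists
print(has_surface(10.0))   # all positive: no surface to march to
```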
