training poses / code for LLFF, COLMAP-based examples? #7
Comments
Hi, I have met the same question before. My solution is adjusting the bias initialization of the last linear layer in the occupancy network so that this network module can produce both positive and negative output, which means there always exists a surface of the object, ensuring the network can converge. However, such a bias only works for a single object and fails on other objects. Thus I guess the authors may normalize the data into a certain range. By the way, I am still very confused by this work.
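A minimal sketch of the kind of bias tweak described above, assuming a generic PyTorch occupancy MLP; the `OccupancyNetwork` class and the bias value are placeholders, not UNISURF's actual module or a tuned setting.

```python
import torch
import torch.nn as nn

# Sketch only: `OccupancyNetwork` is a stand-in for whatever occupancy MLP
# your codebase uses, not UNISURF's real class.
class OccupancyNetwork(nn.Module):
    def __init__(self, hidden_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # raw occupancy logit
        )

    def forward(self, x):
        return self.mlp(x)

net = OccupancyNetwork()
last_linear = net.mlp[-1]
with torch.no_grad():
    # Shift the bias so the network can output both positive and negative
    # values near initialization, i.e. a zero level set (a surface) exists.
    last_linear.bias.fill_(-0.5)  # scene-dependent; tune per object
```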
Some resources I found useful regarding the provided camera poses:
Dear Authors,
Thank you for making this work available! I've been able to reproduce some of the DTU results but am struggling with my own data. In particular, on my own scenes, the first `ray_marching` call in `unisurf` never seems to find any intersections (all depths are 0). I've tried scaling the poses of my scenes by several orders of magnitude (large and small), the opencv-to-opengl trick, and a few other things, but I can't seem to get unisurf to do anything but spit out a mesh of a cube with some splotches cut out of the side. I tried using some of the DTU poses you have and I get a noisy cloud mesh, which is what I'd expect (since the poses and RGB are not consistent).

When I inspect the DTU poses you provide, the data seems a bit strange: the RTs are very large and the R component does not appear to have unit norm. The scale factor also appears to be large. There are many ways to factor a P matrix, but the pose data nevertheless struck me as odd.
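For reference, here is one way I would sanity-check the provided camera data, assuming it follows the IDR-style `cameras.npz` convention where each `world_mat` is a full 3x4 projection P = K[R|t] rather than a bare pose (the file path and key name below are examples, not confirmed from this repo). If that assumption holds, the large entries come from K, and R only becomes orthonormal after K is factored out.

```python
import numpy as np
import cv2

# Hypothetical paths/keys, assuming the IDR-style cameras.npz layout.
cameras = np.load('dtu_scan/cameras.npz')
P = cameras['world_mat_0'][:3, :4]     # 3x4 projection, presumed K [R | t]

# Factor P into intrinsics, rotation, and (homogeneous) camera center.
K, R, t_h, *_ = cv2.decomposeProjectionMatrix(P)
K = K / K[2, 2]                        # normalize intrinsics
cam_center = (t_h[:3] / t_h[3]).ravel()

print('K =\n', K)                      # large focal lengths explain large entries in P
print('R orthonormal?', np.allclose(R @ R.T, np.eye(3), atol=1e-5))
print('camera center =', cam_center)
```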
Could you please make available the poses (and perhaps code) you used in the COLMAP-based studies? COLMAP outputs poses that make a bit more sense (the R in the RT is at least a quaternion with unit norm), so this data could help me debug what's going on.
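For comparison, this is roughly what a COLMAP pose looks like once expanded into a matrix. COLMAP writes each image's pose as a unit quaternion plus translation in world-to-camera form; the conversion below is the standard one, but whatever normalization or axis convention UNISURF expects on top of it is exactly the open question.

```python
import numpy as np

def qvec_to_rotmat(qvec):
    # Standard unit quaternion (w, x, y, z) -> rotation matrix.
    w, x, y, z = qvec
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w,     2*x*z + 2*y*w],
        [2*x*y + 2*z*w,     1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],
        [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*x*x - 2*y*y],
    ])

def colmap_to_c2w(qvec, tvec):
    # COLMAP stores qvec/tvec as world-to-camera; invert to camera-to-world.
    R = qvec_to_rotmat(qvec)
    t = np.asarray(tvec, dtype=float).reshape(3, 1)
    c2w = np.eye(4)
    c2w[:3, :3] = R.T
    c2w[:3, 3:] = -R.T @ t
    return c2w
```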