Inference on custom data? #5
That's indeed a nice question; the code currently doesn't support unposed input or custom formats. Here is a loader example: the c2ws are in OpenCV format, and the scene is normalized within [-0.5, 0.5].
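A minimal sketch of what such a loader could return, assuming c2w matrices in the OpenCV convention (x right, y down, z forward) and a scene already normalized into the [-0.5, 0.5] cube. All names here are hypothetical and not from the LaRa codebase:

```python
import numpy as np

def load_custom_scene(image_paths, c2ws, intrinsics):
    """Hypothetical loader sketch: c2ws are 4x4 camera-to-world matrices
    in OpenCV convention (x right, y down, z forward); the object is
    assumed to be normalized into the [-0.5, 0.5] cube already."""
    c2ws = np.asarray(c2ws, dtype=np.float32)              # (N, 4, 4)
    intrinsics = np.asarray(intrinsics, dtype=np.float32)  # (N, 3, 3)
    assert c2ws.shape[1:] == (4, 4), "expected 4x4 camera-to-world matrices"
    return {
        "images": list(image_paths),
        "c2ws": c2ws,
        "intrinsics": intrinsics,
    }
```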
Yes, only the objects (not the cameras) are normalized to [-0.5, 0.5].
By the way, after scaling the objects, you also need to align the cameras using: https://github.com/autonomousvision/LaRa/blob/main/dataLoader/gobjverse.py#L58-L66
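The idea behind that alignment step can be sketched as follows: whatever shift and scale you applied to normalize the object must also be applied to the camera positions (the translation column of each c2w), so the cameras stay consistent with the object now living in [-0.5, 0.5]. This is a hedged illustration of the principle, not the exact code in gobjverse.py:

```python
import numpy as np

def align_cameras(c2ws, center, scale):
    """Apply the object's normalizing transform to the cameras:
    shift the camera positions by -center, then divide by scale.
    c2ws: (N, 4, 4) camera-to-world matrices; rotations are unchanged."""
    c2ws = np.array(c2ws, dtype=np.float32, copy=True)
    c2ws[:, :3, 3] = (c2ws[:, :3, 3] - center) / scale
    return c2ws
```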
Thanks for your reply.
Here are the results on real-world images. The primary issue appears to be an improper setting of the scene center and scene scale. There is significant room for improvement on real-world inputs, for example by training our model on real-world images.
Can you elaborate more? In my case I render images while looking at the object center, so I think the scene center should be fine, but I am not sure about the scale.
Hi @apchenstu, if I want to use my own real data, how should I set the scale and scene center?
The scene center is the object center, and the bounding box is [-0.5, 0.5], so you need to scale and shift the object so that it is roughly bounded by that box.
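Concretely, the shift is the midpoint of the object's bounding box and the scale is its longest side, so that after normalization the object fits in [-0.5, 0.5]^3. A small sketch under those assumptions (not code from the repo):

```python
import numpy as np

def object_normalization(points):
    """Compute the shift and scale that map an object's bounding box
    into the [-0.5, 0.5]^3 cube. points: (N, 3) array of object points."""
    pts = np.asarray(points, dtype=np.float32)
    bb_min, bb_max = pts.min(axis=0), pts.max(axis=0)
    center = (bb_min + bb_max) / 2.0             # shift: bounding-box midpoint
    scale = float((bb_max - bb_min).max())       # longest side maps to length 1
    normalized = (pts - center) / scale          # now within [-0.5, 0.5]
    return normalized, center, scale
```

The same `center` and `scale` would then be applied to the camera positions, as noted above.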
I was wondering how I can feed my own captures to the model. Can you point me to some code to better understand the coordinate conventions and scaling of the input camera poses?
Thanks in advance.