On the Calculation of the Final Pose of Registered Images #12
Comments
Hi @adrianJW421,
Yours,
@HpWang-whu After I tried it on some custom images and PLY scenes, the values in the translation vector of the estimated extrinsic often exceeded 100. For example, after visualization, the estimated camera pose and the PLY look like this: Was this purely a performance issue, or are there requirements for preprocessing the input image and PLY that I have overlooked? Or is there a particular condition that the PLY point cloud should satisfy (such as a certain coordinate system)?
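As a side note (not part of this repository), a translation of ~100 is only alarming relative to the scene scale: if the PLY is in millimetres or covers a large area, such values can be normal. A minimal sketch of a sanity check, assuming the point cloud is loaded with Open3D and the file name and estimated extrinsic are placeholders:

```python
import numpy as np
import open3d as o3d

# Load the scene; "scene.ply" is a placeholder path.
pcd = o3d.io.read_point_cloud("scene.ply")

# Overall size and location of the point cloud.
extent = pcd.get_axis_aligned_bounding_box().get_extent()
center = pcd.get_center()

# Hypothetical estimated world-to-camera extrinsic; replace with your result.
T_est = np.eye(4)
t = T_est[:3, 3]

print("scene extent (x, y, z):", extent)
print("scene diagonal:        ", np.linalg.norm(extent))
print("scene center:          ", center)
print("|t| of the estimate:   ", np.linalg.norm(t))
# If |t| is far larger than the scene diagonal, the registration likely failed,
# or the cloud is not expressed in metres / not roughly centred near the origin.
```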
Hi @adrianJW421,
Yours,
@HpWang-whu Thanks a lot. Could you try downloading the bad-case data from this drive link? Looking forward to hearing from you.
Excuse me, are you using your own dataset to run the demo?
Hi @adrianJW421,
You might re-pull the repo and have a try. The hyperparameters are introduced in the [...]. Since YOHO was trained on indoor data quite different from yours, its performance is not stable. With [...]
Yours,
Hello!
Based on my understanding, the Tpre matrix generated from the match results is the transformation matrix that converts the extrinsic used to project the input point cloud (pcd) into the camera pose at which the image is registered within the point cloud.
Could you provide an example tool or code snippet that allows the user to directly obtain the final pose of the registered image? If my understanding is incorrect, could you please explain how to accurately determine the final pose of the registered image?
Thank you!
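For reference, a minimal sketch of how such a composition could look, assuming Tpre is a 4x4 transform that maps the frame of the initial projection camera to the frame of the registered image's camera, and that extrinsics follow the usual world-to-camera convention. These assumptions may not match the repo's actual conventions, and E_init / T_pre below are placeholders:

```python
import numpy as np

# Assumed convention (may differ from the repo's): 4x4 world-to-camera extrinsics.
E_init = np.eye(4)   # extrinsic used to project/render the point cloud (placeholder)
T_pre  = np.eye(4)   # transformation estimated from the match results (placeholder)

# Final world-to-camera extrinsic of the registered image under this assumption.
E_final = T_pre @ E_init

# Camera pose (camera-to-world), i.e. where the registered image sits
# in the point cloud's coordinate system.
pose = np.linalg.inv(E_final)
R, t = pose[:3, :3], pose[:3, 3]

print("camera position in point-cloud coordinates:", t)
print("camera orientation (rotation matrix):\n", R)
```

Depending on how the repo defines Tpre (point-cloud-to-camera versus camera-to-camera, left- versus right-multiplication), the composition order or an inverse may need to be swapped; the maintainers' answer above is authoritative.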