Hand on base update #2
Unfortunately, I do not have the free time currently to implement hand on base. I may get to it sometime in the future, but for now it is highly unlikely. Implementing this should be relatively straightforward though. Much of the code for eye on hand can be used as a starting point and the only real things that change are the provided transforms. If you decide to implement this, please feel free to submit a pull request.
Simply changing the transforms that are input won't be enough. The calibration code itself will have to be modified to account for hand on base.
Hello Sir,
Hi @kinoies, just to make sure: when you say eye-to-hand, do you mean eye-on-base calibration?
Yes, it is eye-on-base, with a stationary mounted camera. I am simply inputting my eye-to-hand transforms and rewriting the non-linear optimization function based on the current code.
Hi @kinoies, I wouldn't even worry about the nonlinear optimization for now. Also, it's been a long time since I've looked at the code but if I remember correctly, computing eye-on-base calibration cannot be done by just replacing the input transforms and using the same code (otherwise I would have most likely implemented this). There are some other major updates that are required to get it working. You could try to take a stab at it and submit a pull request but firstly, I'd highly recommend reading the original paper here to get a better idea of what to do.
Hi @QuantuMope, I have read the paper and think I've got a basic understanding. In parallel, I studied the code in calibrator.py and followed it along. I have tried to simply change the inputs and skip the nonlinear optimization, but the calibration is always a bit off. I would really appreciate any help because I tried to make some changes but am stuck and lost :)
Hi @zaksel, it's been quite a while since I've looked at this code, but from glancing at the paper again, I think you are correct. My code should (hopefully haha) work by just changing the transform inputs. Also, @kinoies, this may answer your question as well. Sorry about the very late reply. Currently for eye-in-hand, the TF chain looks like this:
The program expects a list of N such transform pairs. Now, if we wanted to solve for eye-on-base, the TF chain should look like this (this is what is listed in the paper):
So then, the desired transform can be obtained by changing the inputs accordingly. Hope this helps. Let me know if this works or if this is something you've already tried.
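In case a concrete form helps: treating every TF as a 4x4 homogeneous matrix (an assumption about the representation; the names and values below are purely illustrative), chaining frames is just matrix multiplication:

```python
import numpy as np

def make_tf(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    tf = np.eye(4)
    tf[:3, :3] = rotation
    tf[:3, 3] = translation
    return tf

# Illustrative poses only; in practice these come from the robot and the marker detector.
base2hand = make_tf(np.eye(3), [0.4, 0.0, 0.2])
hand2cam = make_tf(np.eye(3), [0.0, 0.0, 0.05])
cam2target = make_tf(np.eye(3), [0.0, 0.0, 0.5])

# Composing along the chain gives the transform spanning the whole chain.
base2target = base2hand @ hand2cam @ cam2target
```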
Hi, thank you very much for your fast and dedicated response! That's exactly what I thought should work. So I tried to feed the inverse of base_to_hand. I have to check again on Monday and will keep you posted.
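(For reference, by "inverse" I just mean the rigid-transform inverse; a rough sketch with illustrative names:)

```python
import numpy as np

def invert_rigid_tf(tf):
    """Invert a 4x4 rigid transform via [R^T, -R^T t] rather than a general matrix inverse."""
    rotation, translation = tf[:3, :3], tf[:3, 3]
    inverse = np.eye(4)
    inverse[:3, :3] = rotation.T
    inverse[:3, 3] = -rotation.T @ translation
    return inverse

# Hypothetical usage: convert every base_to_hand pose into hand_to_base
# before handing the list to the calibrator.
# hand_to_base_list = [invert_rigid_tf(tf) for tf in base_to_hand_list]
```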
Hi, it's me again... I have run a few experiments and had to change a few things:
As part of my problem, I noticed we may have a very different understanding of naming conventions. I will attach my pose pairs here for anyone who's interested. @QuantuMope, it is a great implementation of really hard math. I really appreciate it 👍 Edit: I renamed the file cam2target.txt to target2cam.txt because it contains transformations from marker to camera, so the name was misleading.
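A quick way to check what a pose file actually contains, regardless of its name, is to multiply a stored matrix by the marker origin and see whether the result matches where the camera physically sees the marker. The matrix below is only an illustrative example, not one of my recorded poses:

```python
import numpy as np

# Example pose: marker 5 cm to the right of and 80 cm in front of the camera.
stored_pose = np.array([
    [1.0, 0.0, 0.0, 0.05],
    [0.0, 1.0, 0.0, 0.00],
    [0.0, 0.0, 1.0, 0.80],
    [0.0, 0.0, 0.0, 1.00],
])

# If the stored matrix maps marker-frame points into the camera frame,
# the marker origin should land at the marker's position as seen by the camera.
marker_in_cam = stored_pose @ np.array([0.0, 0.0, 0.0, 1.0])
print(marker_in_cam[:3])  # -> [0.05 0.   0.8 ]

# If that point does not match what the camera observes, the file stores the
# opposite direction and each pose needs np.linalg.inv() before use.
```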
Awesome, thank you @zaksel. From your rough estimates, does the resultant transform seem reasonable to you? The changes you had to make are interesting... I will try to investigate this issue soon so I can provide clear instructions in the README. Thanks again!
The transform seems reasonable, and today we had the chance to verify our calibration. It still has errors in the range of ɛ < 2 mm, but it is a starting point. So maybe I should try to get the non-linear refinement working. I tried to simply enable it with flipped inputs and the error shot up. Do you have any other ideas on how to minimize errors?
Hi, could you please share your code? Here is the code:
And here is the final printed pose:
Note: I have checked that the result of
Oh, you are right. I made a mistake in the files I uploaded. cam2target does not contain transformations from camera to marker but vice versa, so it should be named target2cam.txt (I did rename the file in my comment above), and you have to invert those transformations before feeding them to the Calibrator4DOF. In my case I also had to increase the sv_limit to 0.6. And the translation in the z-direction is calculated from depth values and overwritten, so it is different from the result you will get.
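Feeding the inverted poses could look roughly like the sketch below. The file layout (one flattened 4x4 row-major matrix per line) is only an assumption; your own loading code may differ:

```python
import numpy as np

def load_poses(path):
    # Assumption: each line holds the 16 row-major entries of a 4x4 homogeneous matrix.
    return [row.reshape(4, 4) for row in np.loadtxt(path)]

# target2cam.txt stores marker-to-camera transforms, so invert each one
# before passing the list on to the calibrator.
target2cam_poses = load_poses("target2cam.txt")
cam2target_poses = [np.linalg.inv(tf) for tf in target2cam_poses]
```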
Thank you @zaksel
Could you please elaborate on what you did for the translation in the z-direction?
@AliBuildsAI, if you are using an RGBD camera or get the distance between the camera and the markers from your detection (e.g. cv2.aruco.estimatePoseCharucoBoard), you can save that information and use it to calculate the distance between the robot base frame and the camera frame. Hope that was understandable ;) This is working for me, BUT the authors propose a different approach in the paper; see Section III.E, Determination of the Real tz.
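As a very rough illustration of the idea (this is my own simplification, assuming the camera looks straight down along the base z-axis; the function and the numbers are hypothetical):

```python
import numpy as np

def camera_tz_from_depth(marker_z_in_base, marker_depth_from_camera):
    # marker_z_in_base:         z of the marker in the robot base frame (from the robot pose)
    # marker_depth_from_camera: distance from the camera to the marker along the optical axis
    #                           (e.g. the z component of the detected marker pose)
    return marker_z_in_base + marker_depth_from_camera

# Averaging over several pose pairs smooths out depth noise before the
# resulting value is used to overwrite tz.
samples = [(0.10, 1.05), (0.12, 1.03), (0.11, 1.04)]
tz = float(np.mean([camera_tz_from_depth(z, d) for z, d in samples]))
print(tz)
```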
My apologies for letting this issue go dead again back in November, @zaksel. I was finishing up my dissertation that month, so it was quite a hectic time for me. Thank you to both @AliBuildsAI and @zaksel for this discussion ^_^. The fact that you both are getting a reasonable calibration result means that the framework in its current state is suitable for eye-on-base calibration, but the current need for switching the inputs to the pose selector made me a bit wary, so I decided to get to the bottom of why that is, and this is what I found.

@zaksel is correct that my naming convention was incorrect in my explanation above. That explanation became a bit unclear because I had forgotten the meaning of the notation the original paper uses to denote a transformation. Now, using the left-to-right notation instead, I will attempt to explain what the current calibrator example is doing as well as what you both have done for eye-on-base.

Currently, for eye-in-hand, the code requires a set of transform pairs that compose along the eye-in-hand TF chain, and through this, we obtain the desired TF. Going back to eye-on-base and using the same left-to-right notation, the TF chain composes analogously, and through this, we obtain the desired TF for that setup. This now explains why the inputs to the pose selector end up switched for eye-on-base.
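A small self-contained check of the algebra, as I read it from this thread (the names and numbers are illustrative, and this is not code from the repo): under the left-to-right reading, hand2base_i @ base2cam @ cam2target_i stays constant across robot poses when the camera is mounted on the base, which has exactly the same structure as the eye-in-hand closure base2hand_i @ hand2cam @ cam2target_i. So inverting the robot poses lets the same kind of solver return the camera-to-base transform instead of the hand-to-camera one.

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def make_tf(theta_z, translation):
    tf = np.eye(4)
    tf[:3, :3] = rot_z(theta_z)
    tf[:3, 3] = translation
    return tf

rng = np.random.default_rng(0)

# Ground-truth TFs for a synthetic eye-on-base setup (illustrative values).
base2cam = make_tf(0.7, [1.0, 0.5, 1.5])     # camera rigidly mounted in the workspace
hand2target = make_tf(0.2, [0.0, 0.0, 0.1])  # marker rigidly attached to the hand

closures = []
for _ in range(5):
    # Random robot pose and the marker pose the camera would observe for it.
    base2hand = make_tf(rng.uniform(-np.pi, np.pi), rng.uniform(-0.5, 0.5, size=3))
    cam2target = np.linalg.inv(base2cam) @ base2hand @ hand2target

    # Same closure the eye-in-hand code relies on, but with the robot pose inverted.
    closures.append(np.linalg.inv(base2hand) @ base2cam @ cam2target)

print(np.allclose(closures[0], closures[-1]))  # True: constant across poses
print(np.allclose(closures[0], hand2target))   # True: the constant is hand2target
```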
Anyways, let me know if this checks out with the both of you. I appreciate both of your efforts in getting to the bottom of this. I incorrectly assumed that more work would be needed to achieve eye-on-base functionality, but fortunately it seems that it can be achieved with a simple change of inputs! 🥳🥳🥳 @zaksel, if you are willing, a PR explaining eye-on-base functionality in the README would be greatly appreciated.
Hi,
Thanks for the great work, and is there any update on hand on base?