Implement OpenCV's Hand-Eye Calibration for Comparison #912
Hi @miguelriemoliveira and @manuelgitgomes ! I want to get the transformation from the camera to the pattern based on the detections in a given collection. Before I dive into how to do this, I wanted to ask: is this already done in ATOM somewhere? |
I think I found an example of it in atom/atom_calibration/src/atom_calibration/collect/patterns.py (lines 74 to 77 in e740af9).
I think this is done by the |
…need to confirm calc of pattern_T_cam (#912)
Hello @miguelriemoliveira and @manuelgitgomes ! I think I made some progress, but not a lot. I'm still only considering one collection instead of the whole dataset, to make testing easier. I can now calculate the transformation from the pattern to the camera here: atom/atom_evaluation/scripts/other_calibrations/cv_handeye_calib.py (lines 70 to 107 in 61c4845)
Admittedly, this code is a work in progress and the variable names aren't the best they could be. Some things are also hardcoded for now for testing reasons. I'm unsure how to check whether the values this calculates are correct. How should I test this? Additionally, assuming the values in the matrix are correct, calling the
To replicate, run the command:
Maybe we should meet sometime next week, since I have some doubts about how to go about coding this. When are you available? |
Hi @Kazadhum , I am looking into this. Using this OpenCV page as reference: https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga41b1a8dd70eae371eba707d101729c36

So the transformations are:

cTw (world to camera) -> this is given by a solvePnP with a pattern detector. This is the B in equation (1) of our IROS paper. NOTE: in the IROS paper we use camera to world; not sure if this is just semantics or if the inverse transform is used.

gTb (base to gripper) -> this is given by the robot's direct kinematics. This is the A in our IROS paper.

So I guess what we need to do in a script is to translate our ATOM datasets into these lists of matrices (vector<Mat>) and call the OpenCV function.
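Something like this for the solvePnP part, I guess (a rough sketch, not tested; the dataset keys, the pattern fields, and the intrinsics K and D are assumptions about the ATOM layout, not the final script):

```python
import cv2
import numpy as np

# Hedged sketch: estimate cTw (pattern/world to camera) for one collection
# from the detections stored in an ATOM dataset. The dictionary keys and
# pattern fields below are assumptions about the dataset layout.
def get_cTw(collection, sensor_name, pattern, K, D):
    # 2D detections of the pattern corners in the image
    detections = collection['labels'][sensor_name]['idxs']
    img_points = np.array([[d['x'], d['y']] for d in detections], dtype=np.float32)

    # Corresponding 3D corner coordinates in the pattern frame
    nx, ny = pattern['dimension']['x'], pattern['dimension']['y']
    obj_points = np.zeros((nx * ny, 3), dtype=np.float32)
    obj_points[:, :2] = pattern['size'] * np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)

    # solvePnP returns the pattern pose expressed in the camera frame, i.e. cTw
    _, rvec, tvec = cv2.solvePnP(obj_points, img_points, K, D)
    cTw = np.eye(4)
    cTw[0:3, 0:3] = cv2.Rodrigues(rvec)[0]
    cTw[0:3, 3] = tvec.flatten()
    return cTw
```
|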
I am using this (may be helpful to replicate)
This is using the riwbot atom example. |
I confirm the error above... |
So the error is because you are giving the function numpy arrays, where it needs lists of numpy arrays (at least 3 entries, says the documentation). I will try to refactor the function to create these lists ...
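Something along these lines (sketch only; get_cTw is the function sketched above and get_gTb is a hypothetical helper that reads the base-to-gripper transform from the collection's transforms):

```python
import cv2

# calibrateRobotWorldHandEye() expects lists of rotation matrices and
# translation vectors, one entry per collection (at least 3).
R_world2cam, t_world2cam = [], []        # cTw, from solvePnP per collection
R_base2gripper, t_base2gripper = [], []  # gTb, from forward kinematics

for collection in dataset['collections'].values():
    cTw = get_cTw(collection, sensor_name, pattern, K, D)
    R_world2cam.append(cTw[0:3, 0:3])
    t_world2cam.append(cTw[0:3, 3])

    gTb = get_gTb(collection)  # hypothetical helper: base -> gripper
    R_base2gripper.append(gTb[0:3, 0:3])
    t_base2gripper.append(gTb[0:3, 3])

# Solves AX = ZB and returns the two static unknowns:
# base -> world and gripper -> camera
R_base2world, t_base2world, R_gripper2cam, t_gripper2cam = \
    cv2.calibrateRobotWorldHandEye(
        R_world2cam, t_world2cam, R_base2gripper, t_base2gripper)
```
|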
Fixed it, now it does not crash. Not sure if it's returning correct results, though. @Kazadhum let me know if you need more help ... |
I'm picking this back up... |
Now that this is running and there are no errors, one thing I don't understand is that we use 3 collections as inputs and we get two transformations as output:
The thing is, what collection does this output refer to? Base to World should be static, I think, but the Gripper to Camera transform should change with different collections, no? Maybe I'm missing something obvious... |
Hi @Kazadhum , use riwbot as the example. Here those transformations are both static, check https://github.com/lardemua/atom/tree/noetic-devel/atom_examples/riwbot#configuring-the-calibration The camera does not move w.r.t. the gripper. It's also static. |
If you want we can talk a bit later ... |
Hello @miguelriemoliveira! I understand now, but maybe we should still talk... What time is good for you? |
18h? |
Sounds good! |
Now adding a function to save the calibrated dataset, based on what was done for the ICP calibration: atom/atom_evaluation/scripts/other_calibrations/icp_calibration.py (lines 83 to 99 in 7306689)
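Something like this, maybe (the 'trans'/'quat' layout and the transform key are my assumptions about the ATOM dataset format, to be checked):

```python
import json
from scipy.spatial.transform import Rotation

# Hedged sketch: write the estimated transform into every collection of the
# dataset and dump it back to JSON, mirroring the ICP script's approach.
def save_calibrated_dataset(dataset, gripper_T_cam, transform_key, out_file):
    quat = Rotation.from_matrix(gripper_T_cam[0:3, 0:3]).as_quat()  # x, y, z, w
    trans = gripper_T_cam[0:3, 3]
    for collection in dataset['collections'].values():
        collection['transforms'][transform_key]['trans'] = trans.tolist()
        collection['transforms'][transform_key]['quat'] = quat.tolist()
    with open(out_file, 'w') as f:
        json.dump(dataset, f, indent=2)
```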
|
I created a new dataset for @miguelriemoliveira I can't sync the |
Why not? |
I think it's something to do with eduroam but I'm not sure |
Hum ... @manuelgitgomes do you have the same problem? |
I suppose this is it:
you confirm? |
Hello @miguelriemoliveira! I just got a new laptop and I'm still setting it up, so I can't confirm it, but I believe I didn't use rosrun and just ran it as a script. It should work the same, though. I think the arguments are all correct. Just to check, I think I did:
|
My way is better because you don't have to be in the directory of the script. |
Hi @Kazadhum , finally I was able to make this work. This was a challenge.
|
Here's how to use
|
@Kazadhum , I am done with this. Take a look and see if you agree. Things for you:
|
Ok, thank you @miguelriemoliveira! I will take a look at the code on the train to Aveiro and give you feedback. On Monday I'll create the eye-to-base script and keep working on #939 |
Hi @miguelriemoliveira! I took a look at the script and everything seems alright to me (much cleaner than what I had written). I also tested it and it worked just fine, so I think I can close this issue. I already found some things that I'm going to have to change in the eye-to-hand script, so we're definitely going to have two scripts, like we discussed. Thank you for your help! I'm now going to start working on the eye-to-hand case, based on this one (in #943). I'm also going to delete my old script.
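For reference, my understanding from the OpenCV docs (an assumption to confirm, not necessarily what the script does) is that the eye-in-hand case uses cv2.calibrateHandEye(), which solves AX = XB and estimates only the camera-to-gripper transform:

```python
import cv2

# Hedged sketch of the eye-in-hand call; the input lists would be built per
# collection as in the earlier sketches (forward kinematics and solvePnP).
R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base,   # gripper pose in the base frame
    R_target2cam, t_target2cam)       # pattern pose in the camera frame

# Per the OpenCV docs, an eye-to-hand configuration can reuse this function
# by passing base -> gripper transforms instead, yielding camera -> base.
```
|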
Reopening to fix the issue found in #943...
In the eye-in-hand case, the estimated
Running:
returns:
The |
In:
isn't this value supposed to be in If so, then these errors make sense |
Hi @miguelriemoliveira! If you agree with this change, this can be closed again, because the eye-in-hand case is working okay I think. |
So the problem from yesterday is not visible in the eye-in-hand ... that's good news, I think. |
Hello @miguelriemoliveira and @manuelgitgomes! In my last e-mail, I said the implementation of noise in the OpenCV calibration scripts was one of my next tasks. Today I tested it and observed that the results with noise and without noise were exactly the same. This also happened for larger values of noise.

Having noted this, I looked at the code and concluded that implementing noise the way it is done in ATOM doesn't make sense for the OpenCV methods. Firstly, ATOM's optimization is an iterative method, whereas OpenCV doesn't perform an optimization. The noise we add to a dataset to test ATOM's calibration is added to the initial guess, and OpenCV's method takes no initial guess, so there is nothing for the noise to perturb. Considering this difference in methods, I don't think there is a good way to introduce noise into the OpenCV calibration that allows a fair comparison with ATOM.

Secondly, there's how OpenCV's method computes the atom/atom_evaluation/scripts/other_calibrations/cv_eye_to_hand.py (lines 232 to 246 in d95480f)
The other is calculated by forward kinematics. However, for the eye-to-hand case, the transformation chain from the hand link to the base link isn't affected by the noise: atom/atom_evaluation/scripts/other_calibrations/cv_eye_to_hand.py (lines 282 to 289 in d95480f)
With this in mind, I don't really think it makes sense to implement this. Do you agree? |
Hello! I'm trying to "ATOM-ize" OpenCV's Hand-Eye calibration so we can compare our results with theirs. This issue is related to #891.
I've seen people implement OpenCV's `calibrateHandEye` method [1] [2], but I don't think this is the same as our method and, as such, is not comparable. In this method, the problem is formulated as $AX = XB$, and the only transformation estimated is the one between the camera and the gripper.

I think what we want to use is the `calibrateRobotWorldHandEye()` method, where the $AX = ZB$ problem is addressed.

I'm currently trying to get from ATOM's dataset the necessary transformations to perform the calibrations (right now I'm going to try getting the transform from the camera to the calibration pattern).
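For the record, my reading of the frames in the OpenCV documentation (their naming; an assumption I still need to confirm):

$${}^{c}T_{w}\,{}^{w}T_{b} = {}^{c}T_{b} = {}^{c}T_{g}\,{}^{g}T_{b} \quad\Longrightarrow\quad A_i X = Z B_i$$

with $A_i = {}^{c}T_{w}$ (one solvePnP per collection), $B_i = {}^{g}T_{b}$ (forward kinematics), and the two static unknowns $X = {}^{w}T_{b}$ (base to world) and $Z = {}^{c}T_{g}$ (gripper to camera).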