Calibration result view goes blank if just one point's calibration fails #17
Thanks for the report! Unfortunately, the solution is not viable.

```python
def compute_and_apply(self):
    '''Uses the data in the temporary buffer and tries to compute calibration parameters.

    If the call is successful, the data is copied from the temporary buffer to the active buffer.
    If there is insufficient data to compute a new calibration or if the collected data is not
    good enough, then an exception will be raised.

    See @ref find_all_eyetrackers or EyeTracker.__init__ on how to create an EyeTracker object.

    <CodeExample>calibration.py</CodeExample>

    Raises:
    EyeTrackerConnectionFailedError
    EyeTrackerFeatureNotSupportedError
    EyeTrackerInvalidOperationError
    EyeTrackerLicenseError
    EyeTrackerInternalError

    Returns:
    A CalibrationResult object.
    '''
    interop_result = interop.screen_based_calibration_compute_and_apply(self.__core_eyetracker)
    status = _calibration_status[interop_result[0]]
    if (status != CALIBRATION_STATUS_SUCCESS):
        return CalibrationResult(status, ())
    # the remaining code ...
```

The calibration result will be used by the eye tracker if the call is successful; otherwise ...
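A minimal, self-contained sketch of what this means for callers (the `CalibrationResult` stub and status strings below are stand-ins, not the real `tobii_research` objects): when the status is not success, the returned result carries an empty `calibration_points` tuple, so there is nothing to display.

```python
# Hypothetical stand-ins for tobii_research's status constants and
# CalibrationResult; the real SDK objects are not used here.
CALIBRATION_STATUS_SUCCESS = "calibration_status_success"
CALIBRATION_STATUS_FAILURE = "calibration_status_failure"

class CalibrationResult:
    def __init__(self, status, calibration_points):
        self.status = status
        self.calibration_points = calibration_points

def handle_result(result):
    # Mirrors the branch in compute_and_apply above: a non-success status
    # comes back with an empty calibration_points tuple, so the caller can
    # only prompt for recalibration.
    if result.status != CALIBRATION_STATUS_SUCCESS:
        return "recalibrate"
    return "apply"

print(handle_result(CalibrationResult(CALIBRATION_STATUS_FAILURE, ())))   # -> recalibrate
print(handle_result(CalibrationResult(CALIBRATION_STATUS_SUCCESS, ("p1", "p2"))))  # -> apply
```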
I agree that the if-else check of ...
@datalowe I planned to clean up the code over the weekend (maint branch).
Hi, sorry for taking so long to respond. If I do a calibration and data are captured for points 1, 2, and 4 (while points 3 and 5 fail entirely), the calibration as a whole is unsuccessful and not applied. However, if I then recalibrate points 3 and 5 successfully, I thought the calibration would be successful and applied. Am I mistaken here?

Since I thought the successful data capture for a subset of calibration points could be used together with additional, later (recalibration) capture data, I figured the experimenter should be given feedback about this. Doing so would prevent unnecessarily recalibrating points for which there are already valid data. Hence I suggested the change proposed in this issue. But of course, this is all based on my understanding that "partial" calibration data can be "filled out" by a partial recalibration procedure. I don't have access to an eyetracker now, so I can't test this, unfortunately.
You are correct about that, but that's a different situation. When we conduct a calibration, there are many possible situations: the issue you mentioned is situation 1, and the following scenario is situation 2.

When the user violates the assumption, we don't know how the Tobii SDK handles those data. It would be too risky to assume that Tobii will keep those data for a future calibration. That's why I don't want to show anything when the calibration is not OK (situation 1).
I see, it's unfortunate that Tobii's documentation isn't very clear about this. Would you agree, though, that it should be safe to assume that whenever 'CALIBRATION_STATUS_SUCCESS' is returned, the calibration really has been applied? If so, it should be possible to add a separate indicator, e.g. at the bottom of the calibration results screen, that informs the user about the 'calibration status'. It could read:

Then, the flow as seen from the experimenter's point of view could go like this:
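As a sketch of such an indicator (the message wording here is hypothetical, not a quote from this thread), a small helper could map the returned calibration status to a footer string for the results screen:

```python
# Constant names mirror tobii_research's naming; the message strings are
# hypothetical wording for the proposed results-screen indicator.
CALIBRATION_STATUS_SUCCESS = "calibration_status_success"
CALIBRATION_STATUS_FAILURE = "calibration_status_failure"

def status_indicator(status):
    """Return a short feedback string for the calibration results screen."""
    if status == CALIBRATION_STATUS_SUCCESS:
        return "Calibration successful and applied."
    return "Calibration incomplete - failed points must be recalibrated."

print(status_indicator(CALIBRATION_STATUS_SUCCESS))
print(status_indicator(CALIBRATION_STATUS_FAILURE))
```

The experimenter would then always see whether the calibration was actually applied, even when some per-point feedback lines are missing.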
If it would turn out that Tobii does something really weird, it's possible that in step 5 there would be red/green lines going out from all points (i.e. data have been collected for all points) but ...

Based on my and my colleagues' (they are the ones who have the eyetracker now) testing, the eyetracker, or tobii_research's objects, do 'remember' the calibration data from a partially successful calibration. We've observed the behavior of 'additional' points having red/green lines after running a recalibration with only the 'failed' points, like in steps 4 and 5 in the example. It seems unlikely to me that Tobii's software writers would have made the eyetracker not apply a calibration even when it reports successfully captured calibration data for all points, but I might be missing something.
I believe so; at least, that's what the documentation states. The proposal sounds reasonable to me. I'll see what I can do (I don't have access to the eye tracker at the moment, and it may take some time to make arrangements). In my experience, ...
Steps to reproduce:
i. It's possible that it has to be the last point that you look away from, but I think the issue appears regardless of which point you close your eyes for.
The calibration results view should now be entirely 'blank'.
What I expected to happen:
Rather than getting an entirely 'blank' calibration results view, I'd expect that only the one point/area for which you closed your eyes would be blank, while other calibration points would have corresponding red/green lines indicating how far participant gaze was from the target.
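The expected per-point behavior could be sketched like this (hypothetical data layout, not the package's actual structures): draw feedback lines only for calibration points that actually captured gaze samples, and leave the failed point blank.

```python
def points_to_draw(calibration_points):
    # Keep only points for which at least one gaze sample was captured;
    # a point where the participant's eyes were closed contributes nothing.
    return [p for p in calibration_points if p["samples"]]

points = [
    {"position": (0.1, 0.1), "samples": [(0.11, 0.09)]},
    {"position": (0.5, 0.5), "samples": []},  # eyes closed at this point
    {"position": (0.9, 0.9), "samples": [(0.88, 0.91)]},
]
print(len(points_to_draw(points)))  # -> 2
```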
Cause of issue

In the package's `__init__.py` file, inside the `_show_calibration_result` method, there is a check on the calibration result's `status` attribute. This attribute is assigned its value through the return value of a `tobii_research` function call. Tobii's documentation here is somewhat opaque, but as far as I could understand, this value is set to the constant `CALIBRATION_STATUS_FAILURE` as soon as any point's calibration data collection fails (regardless of whether or not collection was successful for other points).

Suggested solution

Simply remove the `self.calibration_result.status == tr.CALIBRATION_STATUS_FAILURE` check. As far as I can tell, and from testing, this shouldn't cause any issues. The `_show_calibration_result` code relies on `self.calibration_result.calibration_points`, which as far as I can tell only includes points for which calibration data collection was 'successful' (i.e. participant gaze could be captured at all). I tried this in a branch, and with the modification, the results presentation behavior is as I would expect.

In fact, now that I look at it again, it's probably possible to drop the `if len(self.calibration_result.calibration_points) == 0:` check as well, since if the length is indeed 0 then the `for this_point in self.calibration_result.calibration_points:` loop won't do any iterations anyway. I unfortunately can't test this myself since I don't have access to an eyetracker now, but it might be worth considering, for cleaning up the code.

If the solution seems appropriate, I can do a PR.
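Under the reporter's observation that `calibration_points` can be non-empty even when `status` is `CALIBRATION_STATUS_FAILURE`, the proposed simplification could be sketched as follows (hypothetical stand-ins, not the package's actual code): no early return on failure and no length guard, so whatever points carry data get drawn.

```python
from collections import namedtuple

# Hypothetical stand-in for tobii_research's CalibrationResult.
CalibrationResult = namedtuple("CalibrationResult", ["status", "calibration_points"])

def collect_result_lines(calibration_result):
    # Proposed simplification: no status == CALIBRATION_STATUS_FAILURE
    # early-out and no len(...) == 0 guard; an empty calibration_points
    # sequence simply produces no feedback lines.
    lines = []
    for point in calibration_result.calibration_points:
        lines.append(point)
    return lines

# A partially failed calibration still yields lines for captured points:
partial = CalibrationResult("calibration_status_failure", ("p1", "p2", "p4"))
print(len(collect_result_lines(partial)))  # -> 3

# A fully failed calibration yields an empty list, so nothing is drawn:
empty = CalibrationResult("calibration_status_failure", ())
print(len(collect_result_lines(empty)))  # -> 0
```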