Tobii Ocumen Calibration Quality Sample

The Calibration Quality sample visualizes the quality of each calibration point in the current calibration.

This data can be used to:

  • Get an estimate of the expected gaze performance of the current user
  • Visualize if the user has significantly worse performance in certain angles
  • Verify that the user looked at all the stimuli and valid data was collected

Keep in mind that this is a small dataset that only tells you the bias and precision of that user at the time they calibrated. If you want a more statistically robust measurement of bias and precision, we recommend performing your own data collection; see the Measure Quality Yourself section below.

Download

Here you can download the prebuilt sample for your device. Source code access for the sample is provided as part of Tobii Ocumen.

  • Pico Neo 3 Pro Eye
  • HP Reverb G2 Omnicept Edition

The Pico Neo 2 Eye does not support the Calibration Quality API, and thus does not support this sample.

Unity Sample

In the Calibration Quality sample, we visualize the calibration quality data returned by the ConfigurationManager; see the Calibration Quality API section below for details.

Right- and left-eye data are retrieved from the currently active calibration and used to visualize bias and precision for the calibration stimuli points.
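
For orientation, the sketch below shows what this retrieval step looks like conceptually. The Eye enum, CalibrationQualityPoint struct, and FetchQuality method are illustrative placeholders, not the actual Ocumen identifiers; the real ConfigurationManager API names, types, and units may differ.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical stand-ins for the Ocumen types; see the Calibration Quality
// API section below for what each field means.
public enum Eye { Left, Right }

public struct CalibrationQualityPoint
{
    public Vector3 Point;    // stimulus position used for the calibration
    public float Bias;       // bias magnitude; the direction is not reported
    public float Precision;  // spread (standard deviation) of the gaze samples
    public bool Used;        // whether the point was used to create the calibration
    public bool Valid;       // whether enough valid data was collected
}

public class CalibrationQualityReader : MonoBehaviour
{
    void Start()
    {
        // Per-eye quality data for the currently active calibration.
        List<CalibrationQualityPoint> left = FetchQuality(Eye.Left);
        List<CalibrationQualityPoint> right = FetchQuality(Eye.Right);
        Debug.Log($"Left eye: {left.Count} points, right eye: {right.Count} points");
    }

    List<CalibrationQualityPoint> FetchQuality(Eye eye)
    {
        // Placeholder: substitute the actual ConfigurationManager call here.
        return new List<CalibrationQualityPoint>();
    }
}
```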

Getting Started

Make sure you follow our installation instructions for Tobii Ocumen.

Input

The scene is interactive: you can select a precision standard deviation using gaze selection and the Configured Button. You can also select an eye visualization to expand it into the first-person point of view in the headset.

The Configured Button is any of the following:

  • Spacebar
  • Pico headset button
  • Controller trigger button

This mapping can be changed in the ControllerManager script.
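
As a rough illustration, a check of this kind could be written as below, using Unity's built-in input APIs. This is a sketch, not the sample's actual ControllerManager code; the Pico headset button is mapped through the Pico SDK and is omitted here.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Sketch of a Configured Button check: spacebar or controller trigger.
public static class ConfiguredButton
{
    public static bool IsPressed()
    {
        // Spacebar, e.g. when running in the editor or on desktop.
        if (Input.GetKeyDown(KeyCode.Space))
            return true;

        // Controller trigger button via Unity's XR input API.
        var hand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
        if (hand.TryGetFeatureValue(CommonUsages.triggerButton, out bool pressed) && pressed)
            return true;

        return false;
    }
}
```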

Calibration Quality API

Point is the stimulus position used for the calibration and is visualized as a plus symbol.

Bias is shown as a green circle. The person’s bias lies somewhere on the green circle, but the exact location on the circle (that is, the direction of the bias) is not specified. The direction is not reported by our calibration algorithm, but it can be determined by measuring calibration quality yourself.

Precision is shown as blue dots which are centered on the edge of the green circle. Higher precision results in a narrower spread of blue dots, whereas lower precision results in a wider spread. In other words, the thickness of the circular line of blue dots is equivalent to the diameter of the precision spread for the chosen standard deviation.

Used determines whether a point was used to create the calibration. Points can be excluded for various reasons, such as poor data quality.

Valid is true if enough valid data for this point was collected during the calibration. It can only be false if data was collected for one eye but not the other; if no data was collected for either eye, the point is not returned by the calibration API at all.
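
Taken together, these fields drive the visualization roughly as in the sketch below, which reuses the hypothetical CalibrationQualityPoint type from the earlier sketch and assumes Bias and Precision are angular magnitudes (the actual units are defined by the Ocumen API).

```csharp
using UnityEngine;

public static class QualityVisualization
{
    // stdDev is the precision standard deviation chosen via gaze selection.
    public static void Describe(CalibrationQualityPoint p, float stdDev)
    {
        if (!p.Valid)
        {
            Debug.Log("Not enough valid data was collected for this point.");
            return;
        }

        // Plus symbol: drawn at the stimulus position.
        Debug.Log($"Stimulus point at {p.Point}");

        // Green circle: its radius is the bias magnitude; the direction of
        // the bias along the circle is not reported.
        Debug.Log($"Bias circle radius: {p.Bias}");

        // Blue dots: centered on the circle edge. The thickness of the dot
        // band equals the diameter of the precision spread for the chosen
        // standard deviation.
        float bandThickness = 2f * stdDev * p.Precision;
        Debug.Log($"Precision band thickness: {bandThickness}");

        if (!p.Used)
            Debug.Log("Point was excluded when creating the calibration.");
    }
}
```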


Measure Quality Yourself

The Calibration Quality API is limited because its data is measured during the configuration itself, before the resulting calibration is applied. To get more accurate data about the user’s current eye tracking quality, you can make your own measurements after a configuration has been run.

For example, the person calibrates and then looks at a second set of points in the same locations. This second time, you can store all of the gaze points that correspond to each stimulus point. From these gaze points you can derive the exact bias, bias direction, and precision, and also identify outliers that may be of interest.
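
As a sketch of that calculation, assuming gaze samples and stimulus positions are available as normalized direction vectors, bias, bias direction, and precision per point could be computed along these lines:

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class GazeQuality
{
    // Computes bias (angle between the stimulus direction and the mean gaze
    // direction), bias direction, and precision (RMS angular deviation of
    // the samples around the mean, i.e. one standard deviation), in degrees.
    public static void Compute(
        Vector3 stimulusDir, List<Vector3> gazeDirs,
        out float biasDeg, out Vector3 biasDirection, out float precisionDeg)
    {
        if (gazeDirs.Count == 0)
        {
            biasDeg = precisionDeg = float.NaN;
            biasDirection = Vector3.zero;
            return;
        }

        // Mean gaze direction across all samples for this point.
        Vector3 mean = Vector3.zero;
        foreach (var g in gazeDirs) mean += g;
        mean.Normalize();

        biasDeg = Vector3.Angle(stimulusDir, mean);

        // Which way the mean gaze deviates from the stimulus (a good
        // approximation for small angles).
        biasDirection = (mean - stimulusDir).normalized;

        float sumSq = 0f;
        foreach (var g in gazeDirs)
        {
            float a = Vector3.Angle(mean, g);
            sumSq += a * a;
        }
        precisionDeg = Mathf.Sqrt(sumSq / gazeDirs.Count);
    }
}
```

Outliers can then be flagged as individual samples whose angle to the mean direction exceeds some multiple of the computed precision.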

This can also be useful to run after a person has used the headset for a while, to compare quality before and after the person has completed a task. This approach requires the person to look at multiple sets of points, which may be tiring, but it may be worth the effort.

To get started with implementing this, the calibration process in the Unity Configuration Sample can be duplicated and modified so that it does not actually calibrate, but instead collects the person’s gaze points at each stimulus point and afterwards calculates bias and precision per point.
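
A minimal sketch of such a collection loop is shown below, reusing the GazeQuality helper from the previous sketch. GetGazeDirection() is a placeholder for the combined gaze signal from the Ocumen API, and the timing values are arbitrary.

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class QualityMeasurement : MonoBehaviour
{
    public Transform stimulus;            // visual marker moved between points
    public Vector3[] stimulusDirections;  // same layout as the calibration points
    public float sampleSeconds = 1.5f;

    void Start() => StartCoroutine(Run());

    IEnumerator Run()
    {
        foreach (var raw in stimulusDirections)
        {
            Vector3 dir = raw.normalized;
            stimulus.position = transform.position + dir * 2f; // 2 m away
            yield return new WaitForSeconds(0.5f);             // let the eye settle

            // Record gaze samples while the user fixates the point.
            var samples = new List<Vector3>();
            for (float t = 0f; t < sampleSeconds; t += Time.deltaTime)
            {
                samples.Add(GetGazeDirection());
                yield return null;
            }

            GazeQuality.Compute(dir, samples,
                out float bias, out Vector3 biasDir, out float precision);
            Debug.Log($"Point {dir}: bias {bias:F1} deg, precision {precision:F1} deg");
        }
    }

    Vector3 GetGazeDirection()
    {
        // Placeholder: substitute the actual Ocumen gaze direction here.
        return Vector3.forward;
    }
}
```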