Tobii Ocumen FAQ

Tobii Ocumen - Frequently Asked Questions.

If you can’t find an answer here, you can contact us via our contact form.

Table of Contents

  • General
  • Filters & Pipelines
  • SDK

General

Questions about Tobii Ocumen in general.

Where can I get Tobii Ocumen?

Tobii Ocumen is an offering on top of our Tobii XR Core SDK and is available separately. To get access, please contact us via our contact form.

What hardware is Tobii Ocumen compatible with?

Currently, Tobii Ocumen is supported on the Pico Neo 2 Eye headset.

What eye tracking capabilities does Tobii Ocumen give access to?

Tobii Ocumen gives you access to the Advanced API and a data pipeline to utilize eye movement classifiers and filters.

For more detailed information, check out our Tobii Ocumen solutions page.

What is the current readiness of Tobii Ocumen?

Tobii Ocumen consists of several components which, at the time of writing (October 2020), differ in their state of readiness:

Component        Readiness
Advanced API     Release Candidate
Ocumen Runtime   Alpha
Ocumen CLI       Alpha

The respective readiness levels mean:

  • Released - The components have been tested by multiple customers, and after prolonged use no issues were reported. We will not change APIs or their behavior unless external reasons (e.g., updates to Unity or a new eye tracker generation) dictate so. Newer versions containing bug fixes and improvements will be provided, and should require minimal churn.
  • Release Candidate - The APIs have received extensive testing and are used by multiple customers. We do not expect the given APIs to change, unless bugs are encountered during the testing period.
  • Beta - Based on internal usage we are satisfied with the APIs’ behavior and have no immediate plans to change their surface or behavior. However, we are rather early in that process and would expect isolated issues to arise.
  • Alpha - These components see heavy development and design changes. We offer them because we are convinced they provide value to some customers, but only recommend their use in early or experimental development phases.

How can I get feature X?

If something is missing, we would be more than happy to hear about it; please use our contact form.


Filters & Pipelines

Questions related to algorithms and filters.

What is a filter and what are filter pipelines?

Filters address “scientific concerns” based on eye tracking and sensor data and perform “event detection”. Examples of filters include:

  • Fixation filters (Tobii I-VT, Generic Dispersion Windows, …)
  • Saccade filters (Smeets & Hooge 2003, …)
  • Saccadic Intrusion filters

Each filter is a specific implementation with certain trade-offs for various use cases. Based on a use case (e.g., reading experiments, roadside drunkenness tests, …), a researcher or experimenter would select an appropriate filter for the given task.

A filter pipeline is a technical construct used by Tobii Ocumen to integrate necessary data-processing and sensor fusion stages with a filter for optimal performance. An example pipeline might look like this:

{
    "eyetracking_processors": [
        { "Timeshift": { "shift_by": "30 ms" } },
        { "InvalidateNearBlink": { "before": "-20 ms", "after": "20 ms", "min_blink_time": "50 ms" }}
    ],
    "fusion": { "TakeEyeTrackingClosestFrame": {} },
    "algorithms": {
        "ivt": {
            "FixationTobiiIVT": {
                "window_config": {
                    "window_type": {
                        "aggregate_times": {
                            "time_before": "-40 ms",
                            "time_after": "40 ms",
                            "aggregation": "average"
                        }
                    },
                    "padding": "ignore"
                },
                "detect_fixation_if_angle_speed_below_angle_per_sec": "3.0 deg",
                "discard_fixations_below": "60 ms",
                "merge_adjacent_when_gap_below": "100 ms",
                "merge_adjacent_when_angle_below": "0.5 deg"
            }
        }
    }
}

Such a pipeline describes how sensor fusion and data processing should be performed so that, for a given algorithm (an I-VT fixation detector in this case), its output is maximally useful for the experimental conditions in question.
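
To illustrate what a processing stage like InvalidateNearBlink conceptually does, here is a toy Python sketch. This is not Ocumen’s implementation; the sample layout and the function are made up for illustration. It finds sufficiently long runs of invalid data (candidate blinks) and additionally invalidates samples within a margin around them:

from dataclasses import dataclass

@dataclass
class Sample:
    timestamp_ms: float
    gaze_valid: bool  # False while no eyes are detected (a candidate blink)

def invalidate_near_blink(samples, before_ms=20.0, after_ms=20.0, min_blink_ms=50.0):
    # Start from the raw validity and widen every sufficiently long
    # invalid run (a blink) by the given margins.
    result = [s.gaze_valid for s in samples]
    i = 0
    while i < len(samples):
        if not samples[i].gaze_valid:
            j = i
            while j < len(samples) and not samples[j].gaze_valid:
                j += 1  # find the end of the invalid run
            start = samples[i].timestamp_ms
            end = samples[j - 1].timestamp_ms
            if end - start >= min_blink_ms:
                # Invalidate everything inside the widened interval.
                for k, s in enumerate(samples):
                    if start - before_ms <= s.timestamp_ms <= end + after_ms:
                        result[k] = False
            i = j
        else:
            i += 1
    return result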

What filters are available?

To see a definitive list of the filters available in your version of Tobii Ocumen, please run:

ocumen_cli pipeline document -f available_filters.md

How can I verify my pipeline works?

If you want to avoid re-compiling and re-deploying your application while working with filters and pipelines, you can invoke:

ocumen_cli pipeline check --pipeline pipeline.json5

Most importantly, this will output something like:

Algorithm 'ivt':
- left.is_fixation (bool)
- right.is_fixation (bool)

Pipeline could be parsed.............YES
Pipeline could be run on test data...YES

The Algorithms section will list all available algorithms (here ivt) and the values and types that will be obtainable. The check will also verify whether the pipeline is syntactically correct, and whether it succeeded in producing results on relatively normal eye tracking data.

If it fails on Pipeline could be run on test data, this does not necessarily mean anything, since the results might be different for your experiment and your data.

What is a Simple filter?

Sometimes you just need something to work quickly. Most filter classes have a simple filter. That filter should be a good, general-purpose choice, but it makes no particular promises about its behavior and can’t be configured.

These are good for hacking, but should be avoided in mature, scientific products, since their exact behavior is unspecified.

How can I get or use “combined” eye tracking data?

Currently, Tobii Ocumen does not support the general concept of “combined gaze data”, as any such combination is an opinionated abstraction that is hard to handle in a general way for all filters.

With that said, individual filters handle (and may expose) combined data where appropriate. Also, you are free to combine the individual left and right eye data yourself as you see fit.
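
For instance, a naive combination can be computed from the per-eye signals directly. The Python sketch below is illustrative only (the data layout is hypothetical, not an Ocumen API): it averages the two unit gaze direction vectors when both eyes are valid and falls back to the single valid eye otherwise.

import math

def combine_gaze(left_dir, right_dir, left_valid, right_valid):
    # Naive combined gaze: average the two direction vectors and
    # re-normalize; fall back to the valid eye if only one is valid.
    if left_valid and right_valid:
        summed = tuple(l + r for l, r in zip(left_dir, right_dir))
        norm = math.sqrt(sum(c * c for c in summed))
        return tuple(c / norm for c in summed) if norm > 0 else None
    if left_valid:
        return left_dir
    if right_valid:
        return right_dir
    return None

# Example: slightly converged eyes looking mostly forward.
print(combine_gaze((0.02, 0.0, 1.0), (-0.02, 0.0, 1.0), True, True))

Note that such an average is itself an opinionated choice; whether it is appropriate depends on your filters and your use case.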

Why does my left and right eye algorithm output differ?

There are three aspects to that question:

  • Most eye movements are symmetrical to a large degree most of the time.
  • In some situations they are not (e.g., during vergence or saccadic intrusions).
  • There might be measurement noise.

Generally speaking, one would expect to measure a fixation on both eyes at the same time. If a difference is measured, this might either be caused by the experimental setup, or because a particularly sensitive algorithm or setting was chosen that reacted to small variations in measurement noise.

For example, it is not uncommon to observe a difference in left- and right-eye fixation onsets by 20+ milliseconds (equaling 1-2 frames) when using an I-VT filter, if noise was present in the raw data and settings sensitive to these noise patterns were chosen.
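
The noise sensitivity is easy to see in a toy velocity-threshold classifier. The Python sketch below is illustrative only and is not Tobii’s I-VT implementation; it marks a sample as belonging to a fixation when the angular speed to the next sample stays below a threshold (mirroring the 3.0 deg per second setting in the pipeline example above). Per-eye noise near that threshold can flip individual samples and thereby shift the detected fixation onset by a frame or two.

import math

def angle_deg(v1, v2):
    # Angle between two unit gaze direction vectors, in degrees.
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(v1, v2))))
    return math.degrees(math.acos(dot))

def classify_fixation(directions, timestamps_ms, threshold_deg_per_s=3.0):
    # A sample belongs to a fixation when the angular speed towards
    # the next sample is below the threshold.
    flags = []
    for i in range(len(directions) - 1):
        dt_s = (timestamps_ms[i + 1] - timestamps_ms[i]) / 1000.0
        speed = angle_deg(directions[i], directions[i + 1]) / dt_s
        flags.append(speed < threshold_deg_per_s)
    return flags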

How much data should I pass into a pipeline, and what results should I query?

A pipeline processes the defined algorithms with the provided data. The inputs are sensor measurements; the outputs are, mostly, events where the ith event corresponds to the best interpretation at the time of the ith fusion of the provided inputs.

For some algorithms output i only depends directly on fused event input i. In that case, arbitrarily short or long sensor data can be provided, and the output for each i is meaningful.

Other algorithms operate on a window over the input data, for example from -100 ms to 100 ms. In that case, at least 200 ms of data would have to be provided, and only the event right in the middle would be meaningful. Similar considerations apply to eye movement filters which, for example, might need some time to interpret whether a movement constitutes a saccade or something else.
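
As a rough illustration, the following Python sketch computes which output indices have a complete data window, assuming a fixed sample rate and a symmetric window (all names are made up for illustration):

def meaningful_output_range(n_samples, sample_rate_hz, before_ms, after_ms):
    # Only outputs whose window lies entirely inside the provided data
    # are meaningful.
    samples_before = int(round(before_ms / 1000.0 * sample_rate_hz))
    samples_after = int(round(after_ms / 1000.0 * sample_rate_hz))
    first = samples_before                # first index with a full window
    last = n_samples - 1 - samples_after  # last index with a full window
    return (first, last) if first <= last else None

# 1 second of 100 Hz data with a -100 ms .. 100 ms window:
print(meaningful_output_range(100, 100, 100.0, 100.0))  # (10, 89)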

This means there is no universal answer to the given question, but the documentation and filter settings for a given pipeline have to be consulted.

What is validity, how should I use it, what does validity = false mean?

When querying for output, validity values will also be returned, e.g., a float[5] array would be associated with bool[5] validity flags. These validity flags indicate whether the given value was written to, and is therefore meaningful.

Validity true means:

  • The corresponding value was written to.
  • The value means what it says.

Validity false means:

  • The corresponding value might not have been written to.
  • In any case the value is unspecified and meaningless.

Validity flags are usually false when the corresponding data could not be computed, e.g., some eye velocity between two frames where no eyes were present.
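
In practice this means that output values must always be consumed together with their validity flags. A minimal Python sketch, assuming a hypothetical query result:

def valid_values(values, validity):
    # Keep only values whose validity flag is set; invalid slots are
    # unspecified and must not be interpreted.
    return [v for v, ok in zip(values, validity) if ok]

# e.g., a float[5] output with its bool[5] validity flags:
values = [0.12, 0.0, 3.4, 0.0, 0.98]  # invalid slots hold arbitrary data
validity = [True, False, True, False, True]
print(valid_values(values, validity))  # [0.12, 3.4, 0.98]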


SDK

Questions related to the SDK.

Why can’t I choose a provider when I have enabled the Advanced API?

When the Advanced API is enabled, you don’t need any third-party SDKs, because the Advanced API uses Stream Engine, which is built into the chipset of the eye tracker.