Signals & Filters - Filter Documentation

Ocumen Runtime Version 0.2.2926

This document gives an overview of how eye tracking data pipelines can be built. A new pipeline definition can, for example, be created by running:

ocumen_cli pipeline init

This will create a new JSON5 pipeline definition file. Within that file the options presented below are available. To see whether a pipeline you built is valid you can run:

ocumen_cli pipeline check --pipeline your_pipeline.json5

The following notes apply to all pipeline configurations:

  • You must ensure, either by feeding appropriate data or by applying the right processing steps, that all filters only receive time-sorted data in ascending order. The only exceptions are filters explicitly marked as compatible with unsorted data.

  • A valid pipeline, as verified with ocumen_cli pipeline check, is guaranteed to load in an Ocumen Runtime of the same version. However, a valid pipeline is not guaranteed to run unconditionally, as it may, for example, be configured to reject data containing gaps.

  • Algorithm outputs and time stamps of inter-sample events apply to the previous time interval, unless otherwise noted. For example, assume a pipeline is fed with data for 3 events t1, t2 and t3 and asked to compute 2 algorithms, eye_openness and velocities. The output for eye openness at time t2 would equal the measured or interpolated value for openness at input time t2, while the velocity reported at t2 would be the one inferred between the measurement points t1 and t2.

Further reading:

Overview

Frame Processors

Eye Tracking Processors

Fusion Stages

Fused Post-Processors

Velocity Computations

Fixations

Pupil Measures

Saccade Detectors

Spectral

Filters marked i are internal and only provided for preview purposes; filters marked b are in beta.

Frame Processors

Frame pre-processing is applied to headset data only (e.g., position, rotation, and translation of the user’s head).

FailPipelineDeltaTimeAbove

Fails the pipeline if the time between two adjacent samples is larger than the given threshold.

This processing stage can be used as a quality assurance inside a pipeline. When set with a reasonable cutoff time (e.g., 1s) it can be used to discard sessions where the stream of sensor data was disrupted.

Notes

  • This filter may be applied to unsorted data.

Sample Config

{
  "FailPipelineDeltaTimeAbove": {
    "time": "1 s"
  }
}

Configuration Options

  • time - The time in µs above which to fail.
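
To illustrate the check this stage performs, here is a minimal Python sketch (a hypothetical helper, not the runtime implementation), assuming timestamps are given in microseconds:

def fail_if_delta_time_above(timestamps_us, max_delta_us):
    # Fail as soon as the gap between two adjacent samples exceeds the threshold.
    for prev, cur in zip(timestamps_us, timestamps_us[1:]):
        if cur - prev > max_delta_us:
            raise RuntimeError(f"delta of {cur - prev} us exceeds {max_delta_us} us")

# A 1 s cutoff corresponds to 1_000_000 us; the last gap below would trigger the failure.
fail_if_delta_time_above([0, 8_000, 16_000, 2_000_000], 1_000_000)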

FailPipelineDeltaTimeBelow

Fails the pipeline if the time between two adjacent samples is lower than the given threshold.

This filter is particularly useful for sorting out sessions where sensor data was not monotonic. As such, it should be used at the end of the processing pipeline.

Note: If this were the last processing stage before fusion and it failed when set to 0, that would mean the sensor data was not properly ordered. If this filter were then removed, all subsequent algorithms would yield undefined (i.e., wrong) results.

In other words, we strongly recommend keeping this filter as the last step in any processing pipeline to ensure all subsequent processing steps operate on valid data.

Notes

  • This filter may be applied to unsorted data.

Sample Config

{
  "FailPipelineDeltaTimeBelow": {
    "time": "1 us"
  }
}

Configuration Options

  • time - The time in µs below which to fail.

SortByTime

Sorts the data by time in ascending order.

This filter can be used if the time stamps are known to be correct, but the data might have been unordered (e.g., when different threads collect and append sensor data to the same queue). In that case the data can be re-sorted before processing, according to their time stamps.

Notes

  • This filter may be applied to unsorted data.

Sample Config

{
  "SortByTime": {}
}

Timeshift

Shifts all times by a given amount.

This filter is particularly useful for frame data, where headset pose data is usually only available for the time when the current frame will be rendered, which might be some 30 ms in the future.

Notes

  • This filter may be applied to unsorted data.

Sample Config

{
  "Timeshift": {
    "shift_by": "0 s"
  }
}

Configuration Options

  • shift_by - The delta time to add to each timestamp (can also be negative).

ForceIncreasingTimestamps

Enforces that all timestamps are strictly increasing.

If non-strictly monotonic timestamps are found, they are forced to be monotonic by setting them to a value larger than the previous timestamp, based on the given configuration.

This stage can be used if sensor data fails to time sync properly but is known to be ordered. In that case new time stamps are forced onto the data so subsequent stages (e.g., fusion) can still attempt to merge the data sensibly.

Notes

  • This filter may be applied to unsorted data.

Sample Config

{
  "ForceIncreasingTimestamps": {
    "min_delta": "1 us"
  }
}

Configuration Options

  • min_delta - The minimum delta two timestamps must have.
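
The bumping behavior can be sketched roughly as follows (hypothetical Python, assuming timestamps in microseconds; the runtime may handle details differently):

def force_increasing(timestamps_us, min_delta_us=1):
    # Any timestamp that is not at least min_delta_us larger than its predecessor
    # is bumped to predecessor + min_delta_us.
    result = []
    for t in timestamps_us:
        if result and t < result[-1] + min_delta_us:
            t = result[-1] + min_delta_us
        result.append(t)
    return result

print(force_increasing([0, 10, 10, 9, 25]))  # [0, 10, 11, 12, 25]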

Eye-Tracking Processors

Eye-tracking pre-processing is applied to the (left/right) gaze signals provided by the eye tracker only.

AddNoise

Adds artificial noise to eye tracking data.

This processor will add normal-distributed noise to eye tracking data with a mean of 0 and a standard deviation that is approximately in line with real-world eye tracking data.

This filter is useful for testing purposes to stress algorithms and computations under less-than-ideal conditions.

Notes

  • This filter may be applied to unsorted data.

Sample Config

{
  "AddNoise": {
    "seed": 1,
    "relative_magnitude": 1.0
  }
}

Configuration Options

  • relative_magnitude - The relative magnitude of noise, with 0.0 being no noise, and 1.0 being reasonable noise one would commonly observe in eye tracking data; values larger than that simulate excess noise.
  • seed - The seed to use for predictable results. If set to 0 a random seed will be generated instead. The seed is applied per pipeline execution, thus subsequent executions with the same data should yield the same results.
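
As a rough illustration of the described behavior, a Python sketch follows; the baseline standard deviation of 0.5 degrees is an assumption for illustration only, not the value used by the runtime:

import numpy as np

def add_noise(gaze_angles_deg, relative_magnitude=1.0, seed=1):
    # seed == 0 means "use a random seed", mirroring the configuration described above.
    rng = np.random.default_rng(seed if seed != 0 else None)
    sigma = 0.5 * relative_magnitude  # assumed baseline noise level in degrees
    return gaze_angles_deg + rng.normal(0.0, sigma, size=gaze_angles_deg.shape)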

FailPipelineDeltaTimeAbove

Fails the pipeline if the time between two adjacent samples is larger than the given threshold.

This processing stage can be used as a quality assurance inside a pipeline. When set with a reasonable cutoff time (e.g., 1s) it can be used to discard sessions where the stream of sensor data was disrupted.

Notes

  • This filter may be applied to unsorted data.

Sample Config

{
  "FailPipelineDeltaTimeAbove": {
    "time": "1 s"
  }
}

Configuration Options

  • time - The time in µs above which to fail.

FailPipelineDeltaTimeBelow

Fails the pipeline if the time between two adjacent samples is lower than the given threshold.

This filter is particularly useful for sorting out sessions where sensor data was not monotonic. As such, it should be used at the end of the processing pipeline.

Note: If this were the last processing stage before fusion and it failed when set to 0, that would mean the sensor data was not properly ordered. If this filter were then removed, all subsequent algorithms would yield undefined (i.e., wrong) results.

In other words, we strongly recommend keeping this filter as the last step in any processing pipeline to ensure all subsequent processing steps operate on valid data.

Notes

  • This filter may be applied to unsorted data.

Sample Config

{
  "FailPipelineDeltaTimeBelow": {
    "time": "1 us"
  }
}

Configuration Options

  • time - The time in µs below which to fail.

SortByTime

Sorts the data by time in ascending order.

This filter can be used if the time stamps are known to be correct, but the data might have been unordered (e.g., when different threads collect and append sensor data to the same queue). In that case the data can be re-sorted before processing, according to their time stamps.

Notes

  • This filter may be applied to unsorted data.

Sample Config

{
  "SortByTime": {}
}

Timeshift

Shifts all times by a given amount.

This filter is particularly useful for frame data, where headset pose data is usually only available for the time when the current frame will be rendered, which might be some 30 ms in the future.

Notes

  • This filter may be applied to unsorted data.

Sample Config

{
  "Timeshift": {
    "shift_by": "0 s"
  }
}

Configuration Options

  • shift_by - The delta time to add to each timestamp (can also be negative).

InvalidateNearBlink

For various reasons, eye tracking data around blinks can exhibit what is best described as “dipping”, a measured vertical movement around the blink. Since this dipping behavior can interfere with some algorithms, this filter allows you to invalidate eye tracking samples around blinks more aggressively so that the dipping is not detected.

Notes

  • This filter must not be used on data with non-monotonic time stamps.

Sample Config

{
  "InvalidateNearBlink": {
    "before": "-20 ms",
    "after": "20 ms",
    "min_blink_time": "50 ms"
  }
}

Configuration Options

  • after - How many microseconds after a blink to invalidate gaze data.
  • before - How many microseconds before a blink to invalidate gaze data.
  • min_blink_time - Minimum time for a blink.

InvalidateEyes

Indiscriminately invalidates one or both eyes.

This processor sets all fields that have validity flags to invalid, simulating an eye that cannot be found.

Notes

  • This filter may be applied to unsorted data.

Sample Config

{
  "InvalidateEyes": {
    "eye": "Both"
  }
}

Configuration Options

  • eye - The eye(s) to invalidate, can be left, right or both.

ForceIncreasingTimestamps

Enforces that all timestamps are strictly increasing.

If non-strictly monotonic timestamps are found, they are forced to be monotonic by setting them to a value larger than the previous timestamp, based on the given configuration.

This stage can be used if sensor data fails to time sync properly but is known to be ordered. In that case new time stamps are forced onto the data so subsequent stages (e.g., fusion) can still attempt to merge the data sensibly.

Notes

  • This filter may be applied to unsorted data.

Sample Config

{
  "ForceIncreasingTimestamps": {
    "min_delta": "1 us"
  }
}

Configuration Options

  • min_delta - The minimum delta two timestamps must have.

CatmullRomResampling

Samples gaze angles by interpolation using a Catmull-Rom spline.

This processor will create a continuous Catmull-Rom spline over all samples and then sample the spline with the configured time interval.

The resulting data will have gaze angles interpolated, while all other attributes (e.g., pupil size) are taken unchanged from the previous low-frequency sample.

Using this processor you can achieve higher resolution on eye tracking data and thereby get higher precision on events produced by filters.

Since this processor will ignore invalid samples when creating the spline, it is highly recommended to precede this processor with one that handles invalid data.

Notes

  • This filter must not be used on data with non-monotonic time stamps.

Sample Config

{
  "CatmullRomResampling": {
    "output_interval": "4 ms"
  }
}

Configuration Options

  • output_interval - The interval between interpolated output samples. Output rate = 1 / output_interval.
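
For reference, uniform Catmull-Rom interpolation between two samples p1 and p2 (with neighbors p0 and p3) can be sketched as below; whether the runtime uses the uniform or another parametrization is an assumption not confirmed by this document:

def catmull_rom(p0, p1, p2, p3, t):
    # Evaluate the segment between p1 (t = 0) and p2 (t = 1).
    return 0.5 * (
        2.0 * p1
        + (-p0 + p2) * t
        + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
        + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3
    )

The resampler then evaluates such segments at multiples of output_interval, e.g., every 4 ms.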

References

Catmull, Edwin; Rom, Raphael (1974). A class of local interpolating splines. In Barnhill, Robert E.; Riesenfeld, Richard F. (eds.). Computer Aided Geometric Design. pp. 317–326.

10.1016/B978-0-12-079050-0.50020-5


SavitzkyGolaySmoothing

Samples gaze angles by interpolation using fitted Savitzky-Golay polynomial functions.

This processor will fit a polynomial function for every sample for which a full window can be formed, and output a sample of that polynomial at the system_timestamp of the original sample.

The resulting data will have gaze angles interpolated, while all other attributes (e.g., pupil size) are taken unchanged from the previous low-frequency sample.

Since this processor will not try to create a polynomial for a window that contains invalid samples, it is highly recommended to precede this processor with one that handles invalid data.

Notes

  • This filter must not be used on data with non-monotonic time stamps.
  • This filter will allocate memory on each invocation.

Sample Config

{
  "SavitzkyGolaySmoothing": {
    "polynomial_degree": 2,
    "half_window_size": 2
  }
}

Configuration Options

  • half_window_size - The number of additional samples on either side of the target sample to use when fitting a polynomial curve. Total window size = 2 * half_window_size + 1, and it must be >= polynomial_degree + 1.
  • polynomial_degree - The order of the polynomial function to fit in the window.
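
A rough SciPy equivalent of the smoothing itself is shown below (the runtime implementation may differ, e.g., in how windows containing invalid samples are skipped):

import numpy as np
from scipy.signal import savgol_filter

half_window_size = 2
polynomial_degree = 2
gaze_x_deg = np.array([0.0, 0.1, 0.4, 0.2, 0.3, 0.9, 1.0])

smoothed = savgol_filter(
    gaze_x_deg,
    window_length=2 * half_window_size + 1,  # total window size
    polyorder=polynomial_degree,
)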

References

Savitzky, A. and Golay, M.J.E. (1964) Smoothing and Differentiation of Data by Simplified Least-Squares Procedures. Analytical Chemistry, 36, 1627-1639.

10.1021/ac60214a047


Fusion Stages

At the end of every pipeline stands the desire to make gaze-related statements about certain points in time, e.g., was the user fixating at time t?

Fusion provides a temporal grid of such points t for which predictions will be made, and handles how different sensor sources are merged at each such point.

Empty

Always returns no data, used mainly for testing.

Notes

  • This filter may be applied to unsorted data.
  • The output of this filter at position i is an interpretation of what happened between fused events i-1 and i. This means in particular that the value at position 0 is meaningless.

Sample Config

{
  "Empty": {}
}

AsIs

Fuses the n-th element of each stream.

This fusor assumes all sensor streams have the same length. It will then create fused events where the n-th output sample contains the n-th input sample from each source respectively.

Although this sounds like a good idea, this filter is probably not what you want in most situations. In reality, eye tracking and head data arrive at different and slightly varying sampling rates (from the perspective of the game engine’s update loop); also, samples might for various reasons be unordered or have unordered timestamps. This filter ignores all of these issues and simply assumes all data has been processed so it can be merged directly. If this assumption is violated, its results are undefined.

However, if the data is processed accordingly (e.g., when you use Ocumen Pipelines inside your own pipeline architecture) this filter will ensure all sensor data will go through fusion untouched.

This filter will fail the pipeline if the data streams have different lengths.

Notes

  • This filter may be applied to unsorted data.
  • The output of this filter at position i is an interpretation of what happened between fused events i-1 and i. This means in particular that the value at position 0 is meaningless.

Sample Config

{
  "AsIs": {}
}

TakeFrameClosestEyeTracking

Takes a sample of frame data and fuses it with the (temporally) nearest eye tracking event.

This fusion step is ideal when you want to process all pose events, but are fine with dropping some eye tracking events.

Notes

  • This filter must not be used on data with non-monotonic time stamps.
  • The output of this filter at position i is an interpretation of what happened between fused events i-1 and i. This means in particular that the value at position 0 is meaningless.

Sample Config

{
  "TakeFrameClosestEyeTracking": {}
}

TakeFrameInterpolateEyeTracking

Takes a sample of frame data and fuses it with interpolated advanced eye tracking data.

For the given interpolation time t, it will find a pair of samples, a (before) and b (after), and linearly interpolate the values for the relative time between them, weighting a and b accordingly.

This fusion step is ideal when you want to process all pose events, and want to interpolate the most likely eye tracking sample for exactly the given pose time.

Notes

  • This filter must not be used on data with non-monotonic time stamps.
  • The output of this filter at position i is an interpretation of what happened between fused events i-1 and i. This means in particular that the value at position 0 is meaningless.

Sample Config

{
  "TakeFrameInterpolateEyeTracking": {}
}
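
The interpolation described above amounts to a simple linear blend; a minimal sketch with hypothetical helper names:

def interpolate(t, t_a, value_a, t_b, value_b):
    # Weight of b grows linearly from 0.0 at time t_a to 1.0 at time t_b.
    w = (t - t_a) / (t_b - t_a)
    return (1.0 - w) * value_a + w * value_b

print(interpolate(t=12.0, t_a=10.0, value_a=1.0, t_b=20.0, value_b=3.0))  # ~1.4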

TakeEyeTrackingClosestFrame

Takes a sample of advanced eye tracking data and fuses it with the (temporally) nearest frame data.

This fusion step is ideal when you want to process all eye tracking data and want to make sure that neither eye tracking nor pose data will be altered. However, certain pose events might be dropped, for example if the eye tracking sampling rate is much higher than the pose sampling rate.

Notes

  • This filter must not be used on data with non-monotonic time stamps.
  • The output of this filter at position i is an interpretation of what happened between fused events i-1 and i. This means in particular that the value at position 0 is meaningless.

Sample Config

{
  "TakeEyeTrackingClosestFrame": {}
}

TakeEyeTrackingDefaultFrame

Takes a sample of advanced eye tracking data and fuses it with unspecified ‘default’ frame data.

This fusion step can be used for algorithms that operate purely on eye tracking data and do not need valid fusion data.

Notes

  • This filter must not be used on data with non-monotonic time stamps.
  • The output of this filter at position i is an interpretation of what happened between fused events i-1 and i. This means in particular that the value at position 0 is meaningless.

Sample Config

{
  "TakeEyeTrackingDefaultFrame": {}
}

TakeEyeTrackingInterpolateFrame

Takes a sample of eye tracking data and fuses it with interpolated frame data.

For the given interpolation time t, it will find a pair of samples, a (before) and b (after), and linearly interpolate the values for the relative time between them, weighting a and b accordingly.

This fusion step is ideal when you want to process all eye tracking data, and want to interpolate the most likely pose data for exactly the given eye tracking moment.

Notes

  • This filter must not be used on data with non-monotonic time stamps.
  • The output of this filter at position i is an interpretation of what happened between fused events i-1 and i. This means in particular that the value at position 0 is meaningless.

Sample Config

{
  "TakeEyeTrackingInterpolateFrame": {}
}

Fused Processors

After fusion has happened there is another chance to apply post-processing; these steps are applied to each fused sample as a whole.

FailPipelineDeltaTimeAbove

Fails the pipeline if the time between two adjacent samples is larger than the given threshold.

This processing stage can be used as a quality assurance inside a pipeline. When set with a reasonable cutoff time (e.g., 1s) it can be used to discard sessions where the stream of sensor data was disrupted.

Notes

  • This filter may be applied to unsorted data.

Sample Config

{
  "FailPipelineDeltaTimeAbove": {
    "time": "1 s"
  }
}

Configuration Options

  • time - The time in µs above which to fail.

FailPipelineDeltaTimeBelow

Fails the pipeline if the time between two adjacent samples is lower than the given threshold.

This filter is particularly useful for sorting out sessions where sensor data was not monotonic. As such, it should be used at the end of the processing pipeline.

Note: If this were the last processing stage before fusion and it failed when set to 0, that would mean the sensor data was not properly ordered. If this filter were then removed, all subsequent algorithms would yield undefined (i.e., wrong) results.

In other words, we strongly recommend keeping this filter as the last step in any processing pipeline to ensure all subsequent processing steps operate on valid data.

Notes

  • This filter may be applied to unsorted data.

Sample Config

{
  "FailPipelineDeltaTimeBelow": {
    "time": "1 us"
  }
}

Configuration Options

  • time - The time in µs below which to fail.

SortByTime

Sorts the data by time in ascending order.

This filter can be used if the time stamps are known to be correct, but the data might have been unordered (e.g., when different threads collect and append sensor data to the same queue). In that case the data can be re-sorted before processing, according to their time stamps.

Notes

  • This filter may be applied to unsorted data.

Sample Config

{
  "SortByTime": {}
}

Algorithms

Algorithms act on processed, fused data and produce outputs as described in their section.

FixationDispersionAngles

A white-label dispersion detector checking if gaze data is within certain thresholds.

A generic fixation filter based on a dispersion window. The algorithm works approximately like this:

  1. The output starts in the “no fixation” state.
  2. Once sufficiently many samples (based on the configured thresholds) fall inside a given window, a fixation is emitted.
  3. If subsequent samples all fall within the same thresholds, the fixation is continued; if not, the outlier count is increased.
  4. Once the outlier count is exceeded, the fixation is ended.

This filter has a good balance between computational performance and accuracy and is suited for situations where eye tracking quality is good, and your primary concern is detecting periods of relative stillness. However, in situations where slow smooth pursuits dominate, this filter will probably perform poorly.

For performance reasons, in long fixations the fixation centroid is computed over at most the last N samples, where currently N = 16.
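
The dispersion test at the core of steps 2 and 3 can be sketched as follows (hypothetical Python; the actual filter additionally tracks min_duration_for_fixation, max_outliers and the rolling 16-sample centroid mentioned above):

import numpy as np

def within_dispersion(gaze_dirs, max_angle_deg=3.0):
    # True if every gaze direction lies within max_angle_deg of the window centroid.
    dirs = np.asarray(gaze_dirs, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    centroid = dirs.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    angles = np.degrees(np.arccos(np.clip(dirs @ centroid, -1.0, 1.0)))
    return bool(np.all(angles <= max_angle_deg))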

Notes

  • This filter must not be used on data with non-monotonic time stamps.
  • The output of this filter at position i is an interpretation of what happened between fused events i-1 and i. This means in particular that the value at position 0 is meaningless.

Sample Config

{
  "FixationDispersionAngles": {
    "max_angle_for_fixations": "3 deg",
    "min_duration_for_fixation": "100 ms",
    "max_outliers": 1
  }
}

Configuration Options

  • max_angle_for_fixations - How close subsequent gaze direction vectors have to be to detect a fixation.
  • max_outliers - Maximum number of outliers allowed before a fixation is aborted.
  • min_duration_for_fixation - For how long gaze has to be within a certain direction to consider it a fixation.
  • velocity_config - How the window over velocities should be computed, see VelocitiesGazeWindowed.

Outputs

  • left.is_fixation (bool) - Left eye. True if a fixation was detected on that eye’s data.
  • right.is_fixation (bool) - Right eye. True if a fixation was detected on that eye’s data.

References

Holmqvist, K, Nyström, M, Andersson, R, Dewhurst, R, Halszka, J & van de Weijer, J 2011, Eye Tracking : A Comprehensive Guide to Methods and Measures. Oxford University Press.


FixationSimple

An unspecified fixation detector.

This filter will always point to the best, unspecified fixation filter available. Although no guarantees are made, this filter should give a good out-of-the-box performance if you are only interested in “detecting fixations”.

We recommend using this filter only in cases where you do not need to control which algorithm runs and are fine with its behavior changing between versions.

Notes

  • This filter must not be used on data with non-monotonic time stamps.
  • The output of this filter at position i is an interpretation of what happened between fused events i-1 and i. This means in particular that the value at position 0 is meaningless.

Sample Config

{
  "FixationSimple": {}
}

Outputs

  • is_fixation_left (bool) - True if a fixation was detected on left eye data.
  • is_fixation_right (bool) - True if a fixation was detected on right eye data.

FixationTobiiIVT

Fixation filter based on the Tobii I-VT algorithm.

The algorithm description can be found in the referenced paper. Simplified, it works as follows:

  1. Pre-filtered (see below) eye tracking data is accepted.
  2. Velocity calculations with the given window configuration are performed.
  3. A simplified dispersion is run over the velocities to detect raw fixations.
  4. Neighboring raw fixations are merged, and fixation periods that are too short are discarded.

The Tobii I-VT filter is suited for situations where eye tracking quality is good, and your primary concern is detecting periods in which the eye does not move. Like other dispersion algorithms it is particularly unsuited for recordings where smooth pursuits dominate.

The algorithm is implemented approximately as described in the referenced paper, with the following exceptions:

  • Sections 2.3, 2.4: No explicit options are provided in this filter configuration to perform low-pass filtering or gap filling, as these concerns are handled more generically in our processing stages.

  • Section 2.5: Eye selection does not apply to this filter, as we generally operate on each eye individually and expose this information separately.
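
The velocity-threshold core of the algorithm can be sketched roughly as follows (hypothetical Python; merging of adjacent fixations and discarding of short ones are omitted):

import numpy as np

def ivt_raw_fixations(angular_speed_deg_per_s, threshold_deg_per_s=30.0):
    # A sample belongs to a raw fixation when its angular speed is below the threshold.
    return np.asarray(angular_speed_deg_per_s) < threshold_deg_per_s

print(ivt_raw_fixations([5.0, 12.0, 80.0, 400.0, 20.0]))  # [ True  True False False  True]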

Notes

  • This filter must not be used on data with non-monotonic time stamps.
  • The output of this filter at position i is an interpretation of what happened between fused events i-1 and i. This means in particular that the value at position 0 is meaningless.

Sample Config

{
  "FixationTobiiIVT": {
    "detect_fixation_if_angle_speed_below_angle_per_sec": "30.000002 deg",
    "discard_fixations_below": "100 ms",
    "merge_adjacent_when_gap_below": "100 ms",
    "merge_adjacent_when_angle_below": "3 deg"
  }
}

Configuration Options

  • detect_fixation_if_angle_speed_below_angle_per_sec - Threshold above which fixation won’t be detected anymore.
  • discard_fixations_below - Fixations with a duration below this threshold will be discarded.
  • merge_adjacent_when_angle_below - Fixations with angular differences below this one can be merged into one.
  • merge_adjacent_when_gap_below - Fixations separated by at most this interval can be merged into one.

Outputs

  • left.is_fixation (bool) - Left eye. True if a fixation was detected.
  • right.is_fixation (bool) - Right eye. True if a fixation was detected.

References

Olsen, Anneli. The Tobii I-VT fixation filter. Tobii Technology (2012): 1-21.


PupilLamThompsonCorbett1987 b

Computes the size difference between pupils and related metrics.

Usage recommendations:

  • When running this algorithm you must ensure both eyes are equally illuminated by ‘dim’ light within the VR headset.
  • Employ a scene design where both eyes see the same stimulus and receive similar illumination levels for sufficient time (20+ seconds) before the measurements are evaluated.
  • A straight ahead, distant stimulus must be used.
  • The user should not wear any glasses or contact lenses.

In addition, check the underlying hardware guarantees regarding the accuracy of the pupil signal.

Notes

  • This is an unreleased beta filter. It should work, but we would love to hear feedback.
  • This filter may be applied to unsorted data.
  • The output of this filter at position i matches the fused event i.

Sample Config

{
  "PupilLamThompsonCorbett1987": {
    "loewenfeld_criterion_mm": 0.4,
    "ltc_criterion_ratio": 0.15
  }
}

Configuration Options

  • loewenfeld_criterion_mm - The Loewenfeld criterion to use, minimum difference in mm to consider for anisocoria.
  • ltc_criterion_ratio - The Lam, Thompson & Corbett criterion to use, minimum difference in relative area to consider for anisocoria.

Outputs

  • left.pupil_area_mm2 (f32) - Left eye. Computed pupil area.
  • right.pupil_area_mm2 (f32) - Right eye. Computed pupil area.
  • difference_mm (f32) - The observed size difference in mm.
  • loewenfeld_anisocoria (bool) - Was a pupillary inequality of at least 0.4mm observed?
  • ltc_anisocordia (bool) - Was an inequality of 15% surface area observed?
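
As an illustration of how the two criteria relate to the configuration above, here is a hedged Python sketch working from measured pupil diameters (circular pupil areas are an assumption; the runtime's exact computation is not specified in this document):

import math

def anisocoria(diameter_left_mm, diameter_right_mm,
               loewenfeld_criterion_mm=0.4, ltc_criterion_ratio=0.15):
    # Assumed: areas of circular pupils, ratio taken relative to the larger pupil.
    area_left = math.pi * (diameter_left_mm / 2.0) ** 2
    area_right = math.pi * (diameter_right_mm / 2.0) ** 2
    difference_mm = abs(diameter_left_mm - diameter_right_mm)
    loewenfeld = difference_mm >= loewenfeld_criterion_mm
    ltc = abs(area_left - area_right) / max(area_left, area_right) >= ltc_criterion_ratio
    return difference_mm, loewenfeld, ltc

print(anisocoria(4.0, 3.5))  # (0.5, True, True)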

References

Lam, B. L., Thompson, H. S., & Corbett, J. J. (1987). The Prevalence of Simple Anisocoria. American Journal of Ophthalmology, 104(1), 69–73.

10.1016/0002-9394(87)90296-0


SaccadeNonFixation

A simple detector that classifies everything that is not a fixation as a saccade.

Internally, the filter invokes the given fixation filter and marks as a saccade all outputs that were not marked as a fixation.

This algorithm was sometimes used in slow desktop eye tracking systems, but unless you have a very special setting, you should probably not use this filter since there are many reasons why a non-fixation is not a saccade. Most prominently, when an eye turns invalid some fixation filters might stop reporting a fixation, which will cause a saccade to be detected.

Notes

  • This filter must not be used on data with non-monotonic time stamps.
  • The output of this filter at position i is an interpretation of what happened between fused events i-1 and i. This means in particular that the value at position 0 is meaningless.

Sample Config

{
  "SaccadeNonFixation": {
    "fixation": {
      "FixationSimple": {}
    }
  }
}

Configuration Options

  • fixation - The fixation detection algorithm to use.

Outputs

  • is_saccade_left (bool) - Whether a saccade was detected on left eye data.
  • is_saccade_right (bool) - Whether a saccade was detected on right eye data.

SaccadeSimple

An unspecified saccade detector.

This filter will always point to the best, unspecified saccade filter available. Although no guarantees are made, this filter should give a good out-of-the-box performance if you are only interested in “detecting saccades”.

We recommend using this filter only in cases where you do not need to control which algorithm runs and are fine with its behavior changing between versions.

Notes

  • This filter must not be used on data with non-monotonic time stamps.
  • The output of this filter at position i is an interpretation of what happened between fused events i-1 and i. This means in particular that the value at position 0 is meaningless.

Sample Config

{
  "SaccadeSimple": {}
}

Outputs

  • is_saccade_left (bool) - Whether a saccade was detected on left eye data.
  • is_saccade_right (bool) - Whether a saccade was detected on right eye data.

SaccadeSmeetsHooge2003

Detects saccades, including onset and offset.

The algorithm description can be found in the referenced paper. Simplified, it works as follows:

  • Calculate angular gaze velocity based on provided configuration.
  • Find peaks in velocity output.
  • For each peak:
    • Calculate average fixation velocity of the fixation preceding the saccade
    • Find the onset by walking backwards from the peak until gaze velocity is below 3 std deviations of fixation velocity
    • Find the offset by walking forwards from the peak until gaze velocity is below 3 std deviations of fixation velocity
    • If the peak is located between 25%-75% between onset and offset
      • Detect samples between onset and offset as saccades

This algorithm was used in an experiment where the subject saccaded between two small stimuli placed horizontally roughly 10 degrees apart with plenty of time to fixate between saccades.

The algorithm uses absolute velocity for thresholding onset and offset of saccades and is therefore sensitive to glissades. Since glissades may have an absolute velocity above the offset threshold, this can delay the offset of the saccade. A consequence of prolonging the saccade combined with the limitation that the peak velocity must occur in the middle of the saccade is that a valid saccade can be discarded due to peak velocity being attained too early. To avoid this you can lower the early_peak_limit config option.

Glissades may occur on only one eye at a time, causing saccade output from this algorithm to differ for left and right eye.

The option to consider a direction change to signal the offset of a saccade is not part of the paper and was introduced to try to remedy this algorithm's sensitivity to glissades.

In the experiment, this algorithm was used with polynomially fitted velocity data upsampled to 1000 Hz. Using a lower resolution will impact the accuracy of the onset and offset times of saccades.
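
The onset/offset walk around a velocity peak can be sketched as follows (hypothetical Python; peak detection, the fixation-velocity window and the early/late peak limits are omitted):

def saccade_bounds(speed_deg_per_s, peak_index, fixation_mean, fixation_std, sigma=3.0):
    # Walk outwards from the peak until the speed drops below mean + sigma * std
    # of the preceding fixation velocity.
    threshold = fixation_mean + sigma * fixation_std
    onset = peak_index
    while onset > 0 and speed_deg_per_s[onset - 1] >= threshold:
        onset -= 1
    offset = peak_index
    while offset < len(speed_deg_per_s) - 1 and speed_deg_per_s[offset + 1] >= threshold:
        offset += 1
    return onset, offset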

Notes

  • This filter must not be used on data with non-monotonic time stamps.
  • The output of this filter at position i is an interpretation of what happened between fused events i-1 and i. This means in particular that the value at position 0 is meaningless.

Sample Config

{
  "SaccadeSmeetsHooge2003": {
    "lower_threshold_angle_per_sec": "75 deg",
    "fixation_sigma_threshold": 3.0,
    "fixation_velocity_window_start": "200 ms",
    "fixation_velocity_window_size": "100 ms",
    "fixation_velocity_minimum_angle_per_sec": "5 deg",
    "gaze_velocity_config": {},
    "early_peak_limit": 0.25,
    "late_peak_limit": 0.75,
    "detect_offset_when_velocity_direction_changes": false
  }
}

Configuration Options

  • detect_offset_when_velocity_direction_changes - Consider a velocity direction change after a velocity peak the offset of the saccade. Enabling this behavior will significantly reduce false negatives introduced by glissades. This behavior is not in the paper that this algorithm implements and is therefore off by default.
  • early_peak_limit - If the velocity peak appears before this normalized time within the saccade, the saccade is discarded.
  • fixation_sigma_threshold - How many standard deviations from mean fixation velocity to detect onset and offset of a saccade
  • fixation_velocity_minimum_angle_per_sec - The minimal velocity to use as threshold for detecting onset and offset of saccades. Low noise in the preceding fixation can result in a low threshold for the saccade offset, preventing its detection. This setting safeguards against such effects by ensuring a minimum threshold.
  • fixation_velocity_window_size - The size of the window used to compute the average velocity of the fixation preceding the saccade.
  • fixation_velocity_window_start - The time before the saccade peak to start calculating average fixation velocity.
  • gaze_velocity_config - Configuration for the velocity filter used internally to classify saccades.
  • late_peak_limit - If the velocity peak appears after this normalized time within the saccade, the saccade is discarded.
  • lower_threshold_angle_per_sec - The velocity threshold for detecting a saccade.

Outputs

  • is_saccade_left (bool) - Whether a saccade was detected on left eye data.
  • is_saccade_right (bool) - Whether a saccade was detected on right eye data.

References

Smeets and Hooge. Nature of Variability in Saccades. J Neurophysiol 90: 12-20, 2003.


SpectralRFFTGaze b

Performs an RFFT on the gaze angles.

This filter first decomposes eye tracking data into horizontal and vertical gaze angles for each eye. It then iterates over all values with a sliding window of rfft_length samples, center-aligned and with mean 0, and computes the RFFT for each window. A successful run results in a series of N/2 complex numbers, out of which the complex magnitude is computed and normalized by the array length N.

This function should produce similar results to the following SciPy pseudocode:

import numpy
import scipy.fft

assert len(gaze_window_left_x) == rfft_length
tmp = scipy.fft.rfft(gaze_window_left_x)    # complex spectrum of the window
result = numpy.abs(tmp) / rfft_length       # normalized magnitudes
assert len(result) == rfft_length // 2 + 1  # scipy.fft.rfft yields N/2 + 1 bins

The frequency represented by component i depends on the sampling rate and rfft_length and is also returned. It can be estimated as freq[i] = i * sample_rate / rfft_length.

This filter can help to detect periodic events. As mentioned, the returned frequency components depend on the input data. We therefore recommend applying preprocessing steps resulting in predictable gaze sampling rates.
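
For example, assuming a 120 Hz input rate and the sample configuration's rfft_length of 16, the reported bins are spaced 7.5 Hz apart:

sample_rate_hz = 120.0   # assumed input rate after resampling
rfft_length = 16
freq_hz = [i * sample_rate_hz / rfft_length for i in range(rfft_length // 2)]
print(freq_hz)  # [0.0, 7.5, 15.0, 22.5, 30.0, 37.5, 45.0, 52.5]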

Notes

  • This is an unreleased beta filter. It should work, but we would love to hear feedback.
  • This filter must not be used on data with non-monotonic time stamps.
  • The output at each position has special meaning and should be carefully checked according to the above documentation.

Sample Config

{
  "SpectralRFFTGaze": {
    "step_length": 1,
    "handle_missing": {
      "value": 0.0
    },
    "windowing": {
      "blackman": {
        "alpha": 1.0
      }
    },
    "rfft_length": 16
  }
}

Configuration Options

  • handle_missing - How to treat invalid data. For now, specific replacement values can be provided, e.g., "value" : 0.0.
  • rfft_length - How many samples to include in the window (allowed max is 32).
  • step_length - On high frequency data it can be useful to not process all samples to reach a desired frequency. When a sample n is included in the FFT computation, the next sample to include is n + step_length.
  • windowing - The window function to use, can be { "Blackman": { "alpha": 0.5 } }, { "Hamming": { "a0": 0.5 } } or can be omitted for no window function.

Outputs

  • frequencies (vecf32x32) - Describes what frequencies the numbers in fft_local_rot_ represent.
  • left.fft_local_rot_x (vecf32x32) - Left eye data. Power for frequency components along X-axis rotation.
  • left.fft_local_rot_y (vecf32x32) - Left eye data. Power for frequency components along Y-axis rotation.
  • left.fft_pupil (vecf32x32) - Left eye data. Power for frequency components of pupil change.
  • right.fft_local_rot_x (vecf32x32) - Right eye data. Power for frequency components along X-axis rotation.
  • right.fft_local_rot_y (vecf32x32) - Right eye data. Power for frequency components along Y-axis rotation.
  • right.fft_pupil (vecf32x32) - Right eye data. Power for frequency components of pupil change.

References

Smith, S. (2002). Digital signal processing. London: Newnes. ISBN: 075067444X.

10.1016/B978-0-7506-7444-7.X5036-5


VelocitiesGazeInstantaneous

Computes instantaneous angular gaze velocities.

The velocities are calculated as the deltas between two adjacent frames. All angles are reported relative to the user’s ‘nose vector’.
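
A minimal sketch of how such an instantaneous speed can be derived from two adjacent gaze direction samples (hypothetical Python; the decomposition into per-axis rotations is not shown):

import numpy as np

def angular_speed_deg_per_s(dir_prev, dir_cur, delta_t_s):
    # Shortest-path 3D angle between the two gaze directions, divided by the time delta.
    a = np.asarray(dir_prev, float) / np.linalg.norm(dir_prev)
    b = np.asarray(dir_cur, float) / np.linalg.norm(dir_cur)
    angle_deg = np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))
    return angle_deg / delta_t_s

print(angular_speed_deg_per_s([0, 0, 1], [0.0175, 0, 1], 0.008))  # roughly 125 deg/s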

Notes

  • This filter must not be used on data with non-monotonic time stamps.
  • The output of this filter at position i is an interpretation of what happened between fused events i-1 and i. This means in particular that the value at position 0 is meaningless.

Sample Config

{
  "VelocitiesGazeInstantaneous": {}
}

Outputs

  • left.local_angular_speed_x_deg (f32) - Left eye data. Instantaneous rotation speed around X-axis, degrees per second.
  • left.local_angular_speed_y_deg (f32) - Left eye data. Instantaneous rotation speed around Y-axis, degrees per second.
  • left.local_rotation_x_deg (f32) - Left eye data. Rotation around X-axis, clockwise, 0 is straight ahead, PI is straight up.
  • left.local_rotation_y_deg (f32) - Left eye data. Rotation around Y-axis, clockwise, 0 is straight ahead, PI is left.
  • left.local_angular_speed_deg (f32) - Left eye data. Shortest path (3D angle) instantaneous speed, degrees per second.
  • right.local_angular_speed_x_deg (f32) - Right eye data. Instantaneous rotation speed around X-axis, degrees per second.
  • right.local_angular_speed_y_deg (f32) - Right eye data. Instantaneous rotation speed around Y-axis, degrees per second.
  • right.local_rotation_x_deg (f32) - Right eye data. Rotation around X-axis, clockwise, 0 is straight ahead, PI is straight up.
  • right.local_rotation_y_deg (f32) - Right eye data. Rotation around Y-axis, clockwise, 0 is straight ahead, PI is left.
  • right.local_angular_speed_deg (f32) - Right eye data. Shortest path (3D angle) instantaneous speed, degrees per second.

VelocitiesGazeWindowed

Computes windowed angular gaze velocities.

This filter computes relative velocities for and between a series of eye tracking samples. The exact details depend on the window_config, which looks like this:

"window_config": {
    "window_type": {
        "two_fixed": {
            "index_before": -1,
            "index_after": 0
        }
    },
    "padding": "ignore"
}

The property window_type can take three variants; each variant determines how an output velocity is computed from the available gaze data:

  • aggregate_times - Each output velocity is aggregated based on all neighboring pairs of velocities for elements between the given, relative time stamps.
  • aggregate_fixed - Each output velocity is aggregated based on all neighboring pairs of velocities for elements between the given, relative time indices.
  • two_fixed - Each output velocity is computed as the direct delta between the two given indices.

Values for aggregation can be average, median, max, min, absaverage, absmax and absmin.

The padding setting must be set to ignore; it is not implemented at the moment.

Notes

  • This filter must not be used on data with non-monotonic time stamps.
  • The output of this filter at position i is an interpretation of what happened between fused events i-1 and i. This means in particular that the value at position 0 is meaningless.

Sample Config

{
  "VelocitiesGazeWindowed": {
    "window_config": {
      "window_type": {
        "two_fixed": {
          "index_before": -1,
          "index_after": 0
        }
      },
      "padding": "ignore"
    }
  }
}

Configuration Options

  • window_config - How the window over velocities should be computed.

Outputs

  • left.local_angular_speed_x_deg (f32) - Left eye data. Instantaneous rotation speed around X-axis, degrees per second.
  • left.local_angular_speed_y_deg (f32) - Left eye data. Instantaneous rotation speed around Y-axis, degrees per second.
  • left.local_rotation_x_deg (f32) - Left eye data. Rotation around X-axis, clockwise, 0 is straight ahead, PI is straight up.
  • left.local_rotation_y_deg (f32) - Left eye data. Rotation around Y-axis, clockwise, 0 is straight ahead, PI is left.
  • left.local_angular_speed_deg (f32) - Left eye data. Shortest path (3D angle) instantaneous speed, degrees per second.
  • right.local_angular_speed_x_deg (f32) - Right eye data. Instantaneous rotation speed around X-axis, degrees per second.
  • right.local_angular_speed_y_deg (f32) - Right eye data. Instantaneous rotation speed around Y-axis, degrees per second.
  • right.local_rotation_x_deg (f32) - Right eye data. Rotation around X-axis, clockwise, 0 is straight ahead, PI is straight up.
  • right.local_rotation_y_deg (f32) - Right eye data. Rotation around Y-axis, clockwise, 0 is straight ahead, PI is left.
  • right.local_angular_speed_deg (f32) - Right eye data. Shortest path (3D angle) instantaneous speed, degrees per second.