Artifacts and Mitigation

This section outlines what artifacts you might encounter and how you can mitigate them.

All foveated rendering results in changes to the rendered output, or “artifacts”. How noticeable these artifacts are is an essential success criterion for a foveated rendering implementation.

There are no foveation techniques without artifacts. In fact, foveation is about allowing artifacts to appear due to savings in the processing pipeline, but managing their visibility. As a rule of thumb, fewer and less severe artifacts tend to mean lower overall processing savings, with exceptions to this rule mostly being severe artifacts that also yield low savings.

The content foveated rendering is applied to can influence the visibility and severity of artifacts. Mitigating these artifacts may require some changes to the content and tooling, such as the ability to exclude certain objects from foveation, or selecting an alternative foveation technique.

For static foveation, peripheral artifacts are partially hidden by lens imperfections and warping; dynamic foveation allows higher savings but is subject to the latency between eye tracking and the images presented to the user. As lenses, resolutions and fields of view improve, static foveation will become less effective at hiding artifacts and its savings will decrease, while for dynamic foveation the opposite is true.

Some level of artifacts is usually acceptable in exchange for processing savings, but careful balancing is required to ensure that foveated rendering results in the desired savings and acceptable visual results.

In the video below, you can see a few artifacts which may occur when using various foveated rendering techniques.


The most serious artifacts affecting foveation are flickering, popping and inconsistent relative motion anywhere on the displayed images. An explanation of some of their causes is detailed below.

Although a user’s foveal region is relatively small, these artifacts can be particularly noticeable because the eye is sensitive to them across almost the entire field of view. Among other effects, they can trigger saccades (rapid re-orientations of the eye) that bring the problem parts of the image into sharp focus very quickly, which in turn can expose other problem areas in the then-peripheral parts of the user’s vision.

Imagery which repeatedly triggers rapid successions of saccades can be perceived as unpleasant. Changes of detail in imagery close to the area viewed by the fovea are also readily apparent and can break the sensation of immersion, as the brain is confused by the inconsistency.

Artifact Causes

Flickering and Popping

Flickering can have multiple causes, but the most obvious occurs with ‘jaggies’ caused by reduced-resolution rendering. Reduced texture or shader sampling frequencies are also a major cause of flickering, as the lower sampling rate can cause highlights and lowlights to be emphasized and to change rapidly as geometry moves within the scene. A lack of careful mip-map filtering can result in luminance changes between mip levels, and foveation that changes derivatives can shift mip selection, resulting in visible popping. Large changes in geometric level of detail are another obvious cause of popping.
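
The link between derivatives and mip selection can be illustrated with the standard log2-of-texel-footprint rule that GPUs approximate. The sketch below is a simplification, assuming a square texture and isotropic filtering:

```python
import math

def mip_level(duv_dx, duv_dy, texture_size):
    """Approximate the mip level a GPU selects: log2 of the texel
    footprint covered by one shaded sample (isotropic filtering)."""
    footprint = max(abs(duv_dx), abs(duv_dy)) * texture_size
    return math.log2(footprint) if footprint > 1 else 0.0

# Full-rate shading: one sample per pixel over a 1024-texel texture.
fine = mip_level(1 / 1024, 1 / 1024, 1024)    # mip 0

# Coarser shading doubles the UV step between samples, so the
# selected mip jumps a whole level and detail visibly pops.
coarse = mip_level(2 / 1024, 2 / 1024, 1024)  # mip 1
```

This is why any foveation technique that changes the effective sampling rate also changes which mip level is fetched, and why boundaries between foveation levels can show a visible step in texture detail.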

Inconsistent Relative Motion

This refers to motion within the image that is inconsistent with the user’s head motion and with expected motion within the scene. As with flickering, inconsistent relative motion can have multiple causes. In the context of VR foveated rendering, any sharp boundaries between higher- and lower-detail rendering that are statically located on the display appear to move in a way that is inconsistent with the user’s head motion. Changes in resolution or scaling of imagery can cause apparent acceleration and deceleration.

Variable Rate Shading

Currently available hardware implementations of Variable Rate Shading (VRS) work as an extension of MSAA hardware. As a result, rendering that would not be compatible with MSAA is generally also not compatible with VRS.

VRS reduces the frequency of pixel/fragment shader invocations relative to the number of pixels rasterized. This maintains geometric detail at the edges of primitives but reduces the detail within them by replicating the shaded output to multiple pixels.
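
The replication described above can be sketched in plain Python. Here `shade` is a hypothetical stand-in for a fragment shader; the point is only that it runs once per coarse block and its single result is copied to every covered pixel:

```python
def coarse_shade(width, height, rate, shade):
    """Sketch of VRS-style coarse shading: invoke the (hypothetical)
    fragment shader once per rate x rate block, then replicate the
    result to every pixel in the block."""
    image = [[None] * width for _ in range(height)]
    invocations = 0
    for by in range(0, height, rate):
        for bx in range(0, width, rate):
            color = shade(bx, by)          # one shader call per block
            invocations += 1
            for y in range(by, min(by + rate, height)):
                for x in range(bx, min(bx + rate, width)):
                    image[y][x] = color    # replicated; in-block detail is lost
    return image, invocations

# A 2x2 rate quarters the shader invocation count for an 8x8 tile:
# 16 calls instead of 64, which is exactly where the savings come from.
img, calls = coarse_shade(8, 8, 2, lambda x, y: (x, y))
```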

Specific issues:

  • Using VRS outside of the main rasterization pass causes ‘blockiness’ in the resulting images as the VRS hardware does not have access to information about the edges of the originally rendered geometry.
  • Large areas where detail is supplied by textures lose their detail due to the effectively lower resolution that the shaders are running at. This can be exacerbated by changes in mip selection as the VRS alters the derivatives passed into the shaders. The worst-case examples of this kind of problem can be found with parallax maps and other shaders that change lighting and shadowing characteristics, which can frequently result in significant lighting changes and flickering. In simpler cases, the boundaries between different shading levels are often pronounced enough to be either visible in the foveal region or noticeable as inconsistent relative motion.
  • On any size of geometry, sampling of textures whose resolution is higher than the sampling frequency of the shader accessing them can result in flickering, popping, screen-door effects and moiré patterns. Attempting to force higher mip-levels to be used than those that would be selected based on the derivatives passed into the shader (due to VRS or for any other reason) can cause this issue.
  • Transparency controlled by textures (such as leaf outlines, fences and similar) can exhibit extreme flickering as these items move between different VRS levels.
  • Translucency controlled by textures shares the problems of transparency, but can also cause large scale luminance changes in the image areas that show through it.
  • Alpha to coverage can be incompatible with some VRS implementations as some of the hardware to control this is re-purposed to control the VRS. Upcoming hardware and APIs offer alternative controls for alpha to coverage which can work around this problem, but may require some shader modification.
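
The texture-undersampling issue in the list above can be demonstrated with a one-dimensional sketch: a stripe pattern whose period is two texels disappears entirely when sampled at every other texel, and the value it collapses to flips as the sample phase moves, which is the flickering described:

```python
# A 1-D "texture": alternating black/white stripes, period of 2 texels.
texture = [0, 1] * 8                           # 16 texels of fine detail

# Full-rate sampling (one sample per texel) reproduces the stripes.
full = [texture[i] for i in range(16)]

# Half-rate sampling, as under a coarser shading rate without a
# matching mip drop: every sample lands on the same stripe phase
# and the detail collapses to a flat value.
half = [texture[i] for i in range(0, 16, 2)]   # all black

# Shift the sample phase by one texel (e.g. as geometry moves) and
# the whole region flips to the other value: large-scale flicker.
shifted = [texture[i] for i in range(1, 16, 2)]  # all white
```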

Mitigation Strategies

The mitigation of foveated rendering artifacts falls into four basic categories: exclusion, filtering, dithering and temporal coherence.

Exclusion

  • Overlays with high frequency detail (such as text) can be drawn in separate non-foveated passes or even drawn to separate buffers and then composited with the scene.
  • If using screen-position-based VRS foveation, individual draw calls and even specific primitives can override the VRS settings, allowing the problem geometry to be rendered without foveation.
  • Large high detail textures, alpha-channel based transparency (for instance foliage) and parallax mapping can be particularly problematic with VRS and these are best handled by exclusion.
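
The per-draw exclusion described above amounts to forcing the full shading rate before problematic draws. A hypothetical sketch, where `set_shading_rate` stands in for the real per-draw VRS state change of whichever API is in use, and the draw records and their flags are illustrative:

```python
# Hypothetical draw records; `foveation_excluded` marks content known
# to artifact badly under VRS (text, foliage, parallax-mapped surfaces).
draws = [
    {"name": "terrain",  "foveation_excluded": False},
    {"name": "hud_text", "foveation_excluded": True},
    {"name": "foliage",  "foveation_excluded": True},
]

issued = []

def set_shading_rate(rate):
    # Stand-in for the real per-draw shading-rate API call.
    issued.append(rate)

for draw in draws:
    # Full-rate (1x1) shading for excluded draws, coarse elsewhere.
    set_shading_rate("1x1" if draw["foveation_excluded"] else "2x2")
    # ... issue the draw call for `draw` here ...
```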

Filtering

  • Applying additional filtering outside of the main high detail foveal region can hide many artifacts (particularly sharp edges and ‘jaggies’), but care must be taken to avoid introducing contrast changes.
  • Any filtering used should apply appropriate gamma conversions to maintain overall luminance, though some luminance changes are still inevitable.
  • It is important to not over-filter near the foveal region as this can itself create detail ‘popping’ artifacts.
  • Deferred rendering foveation using G-Buffer warping should match the filtering of the output to the derivatives of the warp being used. This ensures that the results are not over-filtered and can limit ‘popping’, ‘jaggies’ and contrast changes in the periphery.
  • VRS modifies shader derivatives, which can change mip selection and cause luminance and contrast popping. To combat this, extra care needs to be taken with mip-map creation to maintain luminance and contrast.
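
The gamma point in the list above matters because averaging sRGB-encoded values darkens the result; filtering must happen in linear light. A minimal sketch using the standard sRGB transfer functions:

```python
def srgb_to_linear(c):
    """Decode an sRGB-encoded channel value to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode a linear-light channel value back to sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0

# Naive average of the encoded values: a mid-grey that displays too
# dark, shifting peripheral luminance wherever the filter runs.
naive = (black + white) / 2  # 0.5

# Gamma-correct average: decode, filter in linear light, re-encode.
correct = linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)
# ~0.735: noticeably brighter than the naive 0.5
```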

Dithering

  • Applying a temporal dither to the entire screen can be an effective method of hiding artifacts and is already common in practice in many VR renderers.
  • Excessive dithering can create an uncomfortable viewing experience so care must be taken to not over-dither.
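
One common form of the temporal dither mentioned above uses an ordered (Bayer) threshold pattern that is shifted every frame, so the quantization error at each pixel averages out over time. A sketch, assuming 1-bit quantization of a constant grey and a simple one-texel per-frame shift:

```python
# 2x2 Bayer thresholds at the centres of four equal bins.
BAYER_2X2 = [[1 / 8, 5 / 8],
             [7 / 8, 3 / 8]]

def dither(value, x, y, frame):
    """Quantize to 0 or 1 against a Bayer threshold that is shifted
    horizontally each frame, so the error averages out temporally."""
    t = BAYER_2X2[y % 2][(x + frame) % 2]
    return 1 if value > t else 0

# A constant 0.5 grey quantized over one 2x2 tile across 2 frames:
frames = [[[dither(0.5, x, y, f) for x in range(2)] for y in range(2)]
          for f in range(2)]

# Each pixel alternates 0/1 between frames, so its temporal average
# recovers 0.5 rather than snapping the whole region to one level.
avg = sum(sum(sum(row) for row in fr) for fr in frames) / 8
```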

Temporal Coherence

  • Temporal filtering to preserve peripheral contrast and luminance can counteract artifacts created by filtering, down-sampling and changes in texture and shader frequencies. This works in a similar way to temporal anti-aliasing and can re-use frame and velocity buffers that may already exist for that purpose.
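
At its core, this kind of temporal filter is an exponential blend of the current frame against reprojected history, as in temporal anti-aliasing. A minimal sketch, ignoring reprojection and using an illustrative blend factor:

```python
def temporal_filter(samples, alpha=0.1):
    """TAA-style history blend: out = lerp(history, current, alpha).
    A small alpha suppresses frame-to-frame flicker at the cost of
    slower response to genuine changes (ghosting risk)."""
    history = samples[0]
    out = []
    for s in samples:
        history = history + alpha * (s - history)
        out.append(history)
    return out

# A peripheral pixel flickering at full amplitude between 0.0 and 1.0:
flicker = [i % 2 for i in range(200)]
filtered = temporal_filter(flicker, alpha=0.1)
# The filtered signal settles near the 0.5 mean with only a small
# residual oscillation instead of full-amplitude flicker.
```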