Advanced Graphics on Next-Gen Mobile – IGIC


November 3, 2011, International Gaming Innovation Conference, Orange, CA--Wolfgang Engel of Confetti Special Effects described techniques for delivering cinematic-quality images on mobile platforms.

The biggest challenges on mobile platforms are gamma correction and high dynamic range (HDR) rendering for high-definition images. These capabilities are already available on consoles and PCs with advanced graphics cards and are now moving into mobile devices built with similar graphics IP.

Light and shadow are the key ingredients of any image. Gamma control adjusts the range of highlights and shadows to fit the usable range of the display device. Dynamic lighting and multiple light sources make the process much more difficult to manage. Similarly, shadowing and texture filtering require an increased range in the darker areas. The limited dynamic range of most displays leaves little headroom for the full range of an image.

When gamma control is used to manage the intensity range, most automated algorithms apply different amounts of compression to different colors. This difference causes compression artifacts like banding, blackouts in the dark areas, and whiteouts in the light areas. Because modern hardware can afford the math per pixel, it is better to work in linear space and defer all gamma correction to the last stage, just before the image goes to the monitor.
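As a minimal sketch of that linear workflow, the standard sRGB transfer functions below show the two conversions involved: decoding 8-bit texture inputs to linear light once, and encoding back to sRGB only at the very end. The talk did not show code; these helper names are illustrative.

```cpp
#include <cmath>

// sRGB -> linear: applied when reading 8-bit texture data, so that all
// lighting and blending math happens in linear space.
float srgb_to_linear(float s) {
    return (s <= 0.04045f) ? s / 12.92f
                           : std::pow((s + 0.055f) / 1.055f, 2.4f);
}

// linear -> sRGB: applied exactly once, in the last stage before the
// image is written out to the display.
float linear_to_srgb(float l) {
    return (l <= 0.0031308f) ? 12.92f * l
                             : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
}
```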

The 8-bit frame buffers in common use have limited dynamic range, and incoming texture data is decompressed before it reaches the buffer. Any remaining data should be run through the decompression function by hand to minimize further artifacts. The frame buffers hold the data before it is sent to an RGB monitor, and more buffers can be added to handle the textures after the objects are processed. These steps are needed because the color manipulations cannot otherwise be processed in the game.

High dynamic range rendering is usually an effect applied in post-processing to mimic the behavior of the eye. The range from starlight to sunlit snow is over 10^12, while the range from shadow to highlight in a normal scene is about 10^4. The usable range for a person at any moment is about 10^3, but that window moves with light adaptation. A monitor can only handle about 10^2, and at a fixed luminance. Since the eye adjusts to differing light levels, the rendering process has to perform the same function, but much faster than a person does, to make the monitor mimic the human response.

Developers should address these issues by doing the following: measure the brightness/luminance of the whole scene, apply a tone-mapping algorithm to the image, slide the range to map onto the monitor, and let the image bloom or glare where the high end overloads. The whole screen is used for the luminance measurement, but remember that the log average is not the same as the arithmetic average, because the eye has a non-linear response curve.
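A minimal sketch of the measure-then-map steps, following the common Reinhard-style approach (the talk does not name a specific operator), might look like this. The luminance values are assumed to be linear scene luminance, and the 0.18 "key" is the conventional middle-grey target, an assumption rather than a value from the talk.

```cpp
#include <cmath>
#include <vector>

// Log-average (geometric mean) luminance: the average the text refers to,
// not the arithmetic mean. The small delta avoids log(0) on black pixels.
float log_average_luminance(const std::vector<float>& lum, float delta = 1e-4f) {
    double sum = 0.0;
    for (float l : lum) sum += std::log(delta + l);
    return std::exp(static_cast<float>(sum / lum.size()));
}

// Map a pixel's luminance into display range: slide the scene's average
// onto a chosen key (middle grey), then compress with a Reinhard curve.
float tone_map(float lum, float log_avg, float key = 0.18f) {
    float scaled = key * lum / log_avg;   // slide the range
    return scaled / (1.0f + scaled);      // compress into [0, 1)
}
```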

The average luminance can be derived by decimating the whole image down to a single pixel. The result is a middle grey, something in the Ansel Adams Zone V range. When a scene changes, this middle grey should change to capture more of the highlights, just as the eye adjusts. The rate of adaptation should change with the gameplay: abrupt transitions from dark to light should take longer to settle than gradual ones.
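One common way to get that gradual adjustment, sketched below under assumed parameter values, is to drift the adapted luminance toward the newly measured average each frame instead of jumping to it. The rate constant tau here is a placeholder; a real game would tune it, and could use a slower rate for dark-to-light transitions as the text suggests.

```cpp
#include <cmath>

// Frame-to-frame light adaptation: exponentially approach the measured
// average luminance. A larger tau means slower adaptation.
float adapt_luminance(float adapted, float measured, float dt, float tau = 1.5f) {
    return adapted + (measured - adapted) * (1.0f - std::exp(-dt / tau));
}
```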

Some examples of this changing light intensity are driving into and out of a tunnel, changing from day to night, or going from outside to inside. The developer has to adjust the average luminance on a frame-by-frame basis. The operator for night adds an overall blue-grey cast to the scene and lowers the average grey intensity. Spotlights are a common source of errors for the lighting controls.
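A sketch of such a night operator might blend each pixel toward a desaturated blue-grey as the scene's average luminance falls. The tint values and the luminance threshold below are illustrative assumptions, not figures from the talk.

```cpp
#include <algorithm>

struct Color { float r, g, b; };

// Pull colors toward a blue-grey cast in dark scenes.
Color night_tone(Color c, float scene_luminance) {
    // Rec. 709 luminance of this pixel.
    float y = 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;
    Color night = { 0.6f * y, 0.7f * y, 1.0f * y };   // blue-grey version
    // Blend fully to the night cast as average luminance approaches zero.
    float t = std::clamp(1.0f - scene_luminance / 0.1f, 0.0f, 1.0f);
    return { c.r + (night.r - c.r) * t,
             c.g + (night.g - c.g) * t,
             c.b + (night.b - c.b) * t };
}
```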

On the other end, blooming occurs when the highlights blow out to white. The best way to handle this situation is to run the scene through a bright-pass filter, blur the resulting image, and add the blurred image back into the scene. Otherwise you get a glare that wipes out the rest of the detail in the image. Such glare is acceptable during fast transitions, because it mimics real behavior, but it is not natural when you are merely moving closer to a light source.
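The three bloom steps named above can be sketched as follows. For brevity this uses a single-channel image and a naive box blur at full resolution; production code would typically use a separable Gaussian on a downsampled buffer. The threshold and radius defaults are assumptions for illustration.

```cpp
#include <algorithm>
#include <vector>

// Bloom: bright pass, blur, then add the blurred highlights back.
std::vector<float> bloom(const std::vector<float>& img, int w, int h,
                         float threshold = 1.0f, int radius = 2) {
    // 1. Bright pass: keep only the energy above the threshold.
    std::vector<float> bright(img.size());
    for (size_t i = 0; i < img.size(); ++i)
        bright[i] = std::max(img[i] - threshold, 0.0f);

    // 2. Box blur the bright-pass result.
    std::vector<float> blurred(img.size(), 0.0f);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f; int n = 0;
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int xx = x + dx, yy = y + dy;
                    if (xx >= 0 && xx < w && yy >= 0 && yy < h) {
                        sum += bright[yy * w + xx]; ++n;
                    }
                }
            blurred[y * w + x] = sum / n;
        }

    // 3. Add the blurred highlights back onto the original image.
    std::vector<float> out(img.size());
    for (size_t i = 0; i < img.size(); ++i)
        out[i] = img[i] + blurred[i];
    return out;
}
```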
