Real-Time Fire Simulations
May 15, 2012, GPU Technology Conference, San Jose, CA—Christopher Horvath, Global Technology Technical Director at Pixar, described the techniques needed to create realistic fire images. Even though hardware and software tools have improved dramatically over time, the images still look too “computerish”.
Fire images need a great deal of artistic interpretation and manipulation to look realistic. The basic physics and image-generation capabilities are available, but they still require considerable work. It is easy enough to simulate fire motion with CUDA and get a good first approximation.
The basics for the simulations started with early work on Eulerian fluids. Stam showed some of the first efforts at SIGGRAPH in '99. Harris added a 2-D fluid solver in '04 and a 3-D fluid solver running on a GPU in '07.
The next breakthrough was APEX Turbulence in '10, which added the ability to translate a grid to follow an object and render the flames as particles. The inspiration came from the ability to produce high-resolution 2-D slices of a 3-D simulation. The simulation is seeded with particles.
For many in the field, the goal is realistic interactive fire. Most developers use 2-D sprites or clips of actual fire for their fire images. The advantages of this technique are high resolution and non-repeating sequences; it is also cheaper to code, requires less storage, and has other benefits. The main disadvantages are computation time and the low level of control available to the artist developing the flames.
To implement realistic-looking fire, they start with a standard 2-D fluid solver in CUDA. Pitch-linear textures feed the simulator, where a geometric multigrid solver computes the flows, and the resulting images are rendered in OpenGL. The basic recipe is to start from a smoke simulation with velocity and density and feed the results into new channels for temperature, fuel, and noise. The simple combustion model includes heat release, which increases buoyancy forces.
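That recipe can be sketched in a few lines. The following is a minimal, illustrative combustion step, not Horvath's actual model; all constants (ignition threshold, burn rate, cooling) are assumed values chosen for demonstration.

```python
import numpy as np

def combustion_step(temp, fuel, vel_y, dt, ignition=0.3, burn_rate=2.0,
                    heat_release=5.0, buoyancy=1.5, cooling=0.5):
    """One simple combustion update: burning fuel releases heat, and
    heat drives an upward buoyancy force. All fields are 2-D arrays;
    the constants are illustrative, not from the talk."""
    burning = (temp > ignition) & (fuel > 0.0)
    burned = np.where(burning, np.minimum(fuel, burn_rate * dt), 0.0)
    fuel = fuel - burned                  # consume fuel where it burns
    temp = temp + heat_release * burned   # combustion releases heat
    temp = temp * np.exp(-cooling * dt)   # gas cools over time
    vel_y = vel_y + buoyancy * temp * dt  # hot gas rises
    return temp, fuel, vel_y
```

In a full solver this step would sit between the advection and pressure-projection stages of the CUDA fluid loop.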
Here are some tips for better-looking fire. First, get the colors right by mapping the temperatures. Next, use high-quality advection and high-quality filters, and experiment with their effects on the particle-based image.
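The standard building block behind the "high-quality advection" tip is semi-Lagrangian advection with good filtering. Here is a sketch on a unit-spaced grid using bilinear filtering (the talk does not specify this exact scheme; production solvers often use higher-order filters):

```python
import numpy as np

def advect(field, vx, vy, dt):
    """Semi-Lagrangian advection: trace each cell centre backwards
    along the velocity field and sample the old field there with
    bilinear interpolation. Edges are clamped."""
    ny, nx = field.shape
    j, i = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    # Backtrace the departure point for every cell.
    x = np.clip(i - dt * vx, 0, nx - 1)
    y = np.clip(j - dt * vy, 0, ny - 1)
    x0 = np.floor(x).astype(int)
    y0 = np.floor(y).astype(int)
    x1 = np.minimum(x0 + 1, nx - 1)
    y1 = np.minimum(y0 + 1, ny - 1)
    fx, fy = x - x0, y - y0
    # Bilinear blend of the four surrounding samples.
    top = field[y0, x0] * (1 - fx) + field[y0, x1] * fx
    bot = field[y1, x0] * (1 - fx) + field[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```

The same routine advects density, temperature, and fuel channels alike.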
Third, use the highest resolution possible, because it gives the greatest latitude in post-processing. Fire is hot, so communicate the temperature by adding glow and blur to the high-resolution rendered images. Add motion blur to provide some heat distortion, and offset the background levels or temperature gradients. Add some tips at the ends of the flames.
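The glow step above is typically a blur-and-add pass. A cheap sketch, assuming a small separable kernel applied repeatedly (the kernel, pass count, and strength here are illustrative, not from the talk):

```python
import numpy as np

def add_glow(image, strength=0.5, kernel=(0.25, 0.5, 0.25), passes=3):
    """Approximate glow: blur the image with a small separable kernel
    several times, then add the blurred copy back on top. Repeated
    small-kernel passes approximate a wider Gaussian."""
    blurred = image.astype(float)
    k = np.asarray(kernel)
    for _ in range(passes):
        # Horizontal then vertical pass of the 1-D kernel.
        blurred = np.apply_along_axis(
            lambda r: np.convolve(r, k, mode="same"), 1, blurred)
        blurred = np.apply_along_axis(
            lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return image + strength * blurred
```

On a GPU this would be two fragment-shader passes over a downsampled render target rather than a CPU loop.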
Add noise to the image to force more turbulence, along with high levels of vorticity confinement. Use procedural control noise to modulate the fire density. Then add embers to the fire. The added particles are passively advected by the velocity fields and show the energy of the air in otherwise empty regions. Motion blur is drawn as quads that stretch between previous and current particle positions using geometry-shader stretching. Infer the temperature from the simulations and allow the various parts of the fire to cool over time. Finally, add scattering to make the fire sections more random.
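The ember behavior described above, passive advection through the velocity field plus cooling, can be sketched as follows; nearest-cell velocity sampling and the cooling constant are assumptions for brevity:

```python
import numpy as np

def step_embers(pos, temp, vx, vy, dt, cool_rate=1.0):
    """Advect ember particles passively through a velocity grid and
    cool them over time. `pos` is an (N, 2) array of (x, y) grid
    coordinates; velocity is sampled at the nearest cell. The previous
    positions are returned so motion-blur quads can be stretched
    between old and new locations."""
    i = np.clip(pos[:, 1].round().astype(int), 0, vx.shape[0] - 1)
    j = np.clip(pos[:, 0].round().astype(int), 0, vx.shape[1] - 1)
    prev = pos.copy()
    pos = pos + dt * np.stack([vx[i, j], vy[i, j]], axis=1)
    temp = temp * np.exp(-cool_rate * dt)   # embers cool as they fly
    return pos, temp, prev
```

In the rendering pass described above, each (prev, pos) pair becomes a stretched quad in the geometry shader, and the cooled temperature drives the ember's color.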
Simon Green from NVIDIA described the actual physics of black-body radiation in a fire. The appearance should be that of a black body with mostly red-orange-yellow colors, plus some soot and smoke. The radiation is the result of evaluating Planck's law at discrete samples taken in 5 nm steps from 380 to 780 nm.
The 81 steps represent the spectral emission of the fire and depend upon the fuel types and oxygen mixtures. The output spectra then have to be related to the spectral response of the imager. In most cases this is an RGB distribution, which correlates to the responses of the long-, medium-, and short-wavelength cones in the eye.
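The sampling described above, Planck's law evaluated every 5 nm across the visible band, is straightforward to reproduce:

```python
import numpy as np

# Physical constants (SI units).
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def blackbody_spectrum(temp_k):
    """Spectral radiance from Planck's law, sampled in 5 nm steps over
    the visible band 380-780 nm, giving the 81 samples described in
    the text."""
    wavelengths_nm = 380.0 + 5.0 * np.arange(81)
    lam = wavelengths_nm * 1e-9
    # Planck's law: B = (2hc^2 / lam^5) / (exp(hc / (lam k T)) - 1)
    radiance = (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temp_k))
    return wavelengths_nm, radiance
```

At fire-like temperatures such as 1300 K, the Planck peak lies in the infrared, so within the visible band the radiance rises steadily toward red, which is why flames look red-orange-yellow.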
The major issue is that converting spectra through CIE XYZ into an RGB color space allows the red and green responses to go negative over parts of the distribution. As a result, a 1300 K image reproduced via CIE values looks computer-generated, because it has too much blue. Therefore, designers need to match the curves in their images to the human response curves.
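The negative-channel problem is easy to demonstrate with the standard XYZ-to-linear-sRGB matrix: a saturated spectral color that lies outside the RGB gamut produces a negative channel, which a naive renderer simply clips, distorting the fire's hue. The CIE 1931 tristimulus values used in the example below are the published values for a monochromatic ~500 nm stimulus.

```python
import numpy as np

# XYZ -> linear sRGB matrix (IEC 61966-2-1).
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_linear_srgb(xyz):
    """Convert a CIE XYZ triple to linear sRGB. Colors outside the
    sRGB gamut come out with negative channel values and must be
    clipped or gamut-mapped before display."""
    return XYZ_TO_SRGB @ np.asarray(xyz, dtype=float)

# Spectral green near 500 nm (CIE 1931 color matching function values):
# the red channel goes negative, exactly the effect described above.
spectral_green = xyz_to_linear_srgb([0.0049, 0.3230, 0.2720])
```

This is why simply pushing a black-body spectrum through XYZ and into RGB without gamut handling yields the too-blue, "computer generated" look the talk warns about.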