Jon Peddie Research at Siggraph
August 8, 2012, Siggraph, Los Angeles—Jon Peddie Research presented research findings on the growth of hardware for the computer graphics markets. After the presentation, a short panel discussed the impact of multi-core processors on graphics work.
Although growth has been uneven, it has averaged over $10B per decade, and appears to have accelerated in the past five years. Last year, game consoles accounted for about $15B of the total $42B in hardware, which includes gaming PCs, consoles, workstations, and high-end monitors for graphics.
Graphics software totaled almost $14B, spread across vector graphics, imaging, visualization and simulation, digital video, modeling and animation, 2-D cel animation, and CAD/CAM. Both markets are expected to continue growing over the next three years.
Core counts in processors and accelerators were fairly flat until about '05, and have grown dramatically since. First the CPUs got multiple cores, then the GPUs. Sometime around '09, the average number of cores in a graphics computer crossed the 500-core threshold. Now the number of cores is doubling every 2-4 years. The latest AMD GPU contains over 1,600 cores and the Nvidia chip has 1,500.
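The doubling claim can be sanity-checked against the figures above. A minimal sketch, assuming the cited numbers (roughly 500 cores in '09, 1,600 in the latest AMD GPU in 2012):

```python
# Quick check of the core-count growth figures cited in the talk.
# Assumed data points: ~500 cores in 2009, ~1,600 cores in 2012.
import math

cores_2009 = 500
cores_2012 = 1600
years = 2012 - 2009

doublings = math.log2(cores_2012 / cores_2009)  # ~1.68 doublings
doubling_time = years / doublings               # ~1.8 years

print(f"{doublings:.2f} doublings in {years} years")
print(f"implied doubling time: {doubling_time:.1f} years")
```

The implied doubling time of roughly 1.8 years sits at or just below the low end of the quoted 2-4 year range, consistent with the observation that growth has been picking up.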
The proliferation of processing cores makes it possible to create computer graphics with finer detail and smoother edges. The latest tools enable photorealistic shadows and highlights in still images. But that creates a problem: people can easily tell when a character in a movie is not real. Getting the hair and facial features close to human takes a lot of work, yet the fine nuances of facial expression are still missing. We have no problem when the characters are highly stylized cartoons, but the visual standards for live action currently exceed the capabilities of the computers and software.
One line of progress is to increase the resolution of the image. More pixels mean smoother edges and finer gradation of textures and shadows. The challenge is to perform all the image processing at the projection frame rate of 24 fps. Moving to 4K or 8K resolution and higher frame rates exacerbates the processing problem.
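The scale of that problem is easy to quantify: pixel throughput grows with the product of width, height, and frame rate. A short illustration (resolutions here are the common DCI container sizes, used as assumptions):

```python
# Pixel throughput at various resolutions and frame rates, showing
# why 4K/8K and higher frame rates strain the processing budget.
formats = {
    "2K @ 24 fps": (2048, 1080, 24),
    "4K @ 24 fps": (4096, 2160, 24),
    "4K @ 48 fps": (4096, 2160, 48),
    "8K @ 24 fps": (8192, 4320, 24),
}

base = 2048 * 1080 * 24  # 2K at 24 fps as the baseline
for name, (w, h, fps) in formats.items():
    px_per_sec = w * h * fps
    print(f"{name}: {px_per_sec / 1e6:7.1f} Mpx/s "
          f"({px_per_sec / base:4.1f}x baseline)")
```

Doubling both dimensions quadruples the pixel count, so 8K at 24 fps already demands 16 times the per-second processing of 2K, before any increase in frame rate.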
Even with better resolution and the ability to model fine-grained features like hair and smoke, it is still very difficult to get skin right. Skin tones are the result of subsurface scattering, not just surface reflection, so there are many subtle light interactions that are very hard to simulate. That is why CG gives us cartoon people, distorted humanoids and robots, or distant shots of people.
Peddie moderated a panel composed of Jacob Rosenberg, a technology expert; Phil Miller from Nvidia; Rob Powers from LightWave; David Forrester from LightWorks; and James McCombe from Caustic Graphics.
Relevance of cores?
Rosenberg noted that a post facility can use a farm of GPUs plus some CUDA software to accelerate all graphics functions. They can preprocess many of the files, reducing time and cost before going to the big facility for final rendering. On-site functions include color correction and grain addition; adding grain is a very GPU-intensive process. Other functions include compression and decompression, and noise reduction. The GPUs accelerate playback and the integration of effects in post.
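Grain addition is GPU-intensive because it is an independent per-pixel, per-frame operation over millions of pixels. A minimal CPU sketch in NumPy, assuming an illustrative grain model (zero-mean Gaussian noise scaled by brightness); this is not any facility's actual algorithm, and a production pipeline would run an equivalent kernel on the GPU:

```python
import numpy as np

def add_grain(frame, strength=0.04, seed=None):
    """Add synthetic film grain to a float image with values in [0, 1].

    Illustrative model (an assumption, not an industry standard):
    zero-mean Gaussian noise whose amplitude scales with pixel
    brightness, so grain is more visible in midtones than shadows.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, strength, size=frame.shape)
    grained = frame + noise * np.sqrt(np.clip(frame, 0.0, 1.0))
    return np.clip(grained, 0.0, 1.0)

# Example: one mid-gray 1080p RGB frame
frame = np.full((1080, 1920, 3), 0.5, dtype=np.float64)
out = add_grain(frame, seed=0)
```

Because every pixel is processed independently, the workload is embarrassingly parallel, which is exactly the shape of computation GPUs accelerate well.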
Miller added that increasing processing speeds allows better work load profiles and load balancing across the parallel processes.
Powers offered a software perspective. The challenge of creating realistic humans in CG is only part of the issue. Better processors allow the technology to support the artist: the improvements let the artist view images in near-real time to better serve the story, and enable interactive individual visualizations. The problem is that writing software for multi-core processors is harder than for uniprocessors because of the difficulties of parallelism.
Rosenberg noted that location shots capture the physical elements, and the CG layers, shadows, etc., can then be viewed without rendering.
Forrester stated that advertising is moving completely to CG. Rendering is CPU- and GPU-intensive, so users need to look to other classes of platforms. The challenge is scaling performance for rendering and ray tracing.
McCombe contended that performance and multi-core have moved from workstations to PCs and, lately, to laptop computers. The more difficult functions are being handled in custom circuits. Multi-core broadens availability by bringing that performance down to smaller platforms. The latest machines can do ray tracing for games and real-time applications quickly and with minimal power. The smaller form factors let more people participate in creative work.
Access to a large number of cores or the cloud?
Powers responded that the software interface is the driver. Many background functions can be cloud-based with control at the tablet level. The key is to separate control from major function processing.
Miller added that virtualized GPUs are already available: the VGX software and hardware based on the Kepler chips can be accessed remotely. The limits are ping time and network bandwidth; integrating H.264 encoding into the flow helps.
Forrester disagreed, saying that online rendering is not realistic. The market is not requesting the service because users want to render in-house; the issue is data security.
Powers rejoined that all cloud services have some IP protection. The effort is in developing local private clouds rather than depending on the public cloud. Security is easier if everything is self-contained.
Miller stated that local cloud operations are not needed unless speed is an issue.
McCombe observed that outsourced cloud hardware puts the creative people on the other side of a small pipe. The user interface and interconnect hardware are bandwidth-limited. A cloud service can be complementary to in-house and in-box graphics functions.
Increasing cores does not imply increased throughput?
Miller said that this is a software issue. It takes time to tune software for the tools, and the GPUs are changing at a high rate. CAD-type work is communications-intensive, so it presents a data-flow issue. Users need to clean up their graphics pipelines.
Powers observed that software is all about optimization. The CPU used to be the key, because each new processor generation delivered a free speed-up. Now, however, most code is not written for parallel operation. It takes about 2-3 years to implement major architectural changes in software, so the next generation of software should start showing up next year.
McCombe added that it will take different algorithms, since most existing algorithms have fundamental serial dependencies. The software has to adapt to hardware architecture changes to gain performance.
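The serial-dependency point can be illustrated with a prefix sum, a classic example: the obvious loop depends on the previous iteration at every step, but reformulating the algorithm exposes parallel work. A sketch (function names here are illustrative, not from any panelist's software):

```python
def prefix_sum_serial(xs):
    """Running total; each step depends on the previous one."""
    out, acc = [], 0
    for x in xs:
        acc += x
        out.append(acc)
    return out

def prefix_sum_blocked(xs, nblocks=4):
    """Two-phase scan: scan independent blocks (parallelizable),
    then offset each block by the totals of the blocks before it.
    Only the small offset pass remains serial."""
    n = len(xs)
    size = -(-n // nblocks)  # ceiling division
    blocks = [xs[i:i + size] for i in range(0, n, size)]
    partials = [prefix_sum_serial(b) for b in blocks]  # phase 1: parallel
    out, offset = [], 0
    for p in partials:                                 # phase 2: serial, tiny
        out.extend(v + offset for v in p)
        offset += p[-1] if p else 0
    return out

data = list(range(1, 11))
assert prefix_sum_blocked(data) == prefix_sum_serial(data)
```

The result is identical, but the blocked version restructures the dependency chain so most of the work can be distributed across cores, which is the kind of algorithmic change McCombe describes.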
Bandwidth to the cloud?
Powers answered that three things matter: the software itself, whether changes are applied only to deltas or to the full data set, and restructuring the system to separate the user interface from the processing.
Miller rejoined that bandwidth is not the issue; quality and latency are the real issues.