BEYOND RAY TRACING
Why Nvidia’s RTX tech is just the start
“RAY TRACING is the holy grail of gaming graphics, simulating the physical behavior of light to bring real-time, cinematic quality rendering.” That’s the pitch from Nvidia for its Turing-powered RTX graphics cards, complete with their hardware ray-tracing cores. The company touts them as its “most important innovation in computer graphics in more than a decade.” Big talk, but then ray tracing does offer jaw-dropping graphics, where every reflection, refraction, and shadow is rendered with pixel-perfect accuracy. Who wouldn’t want to chase that?
Since the 1990s, games have used rasterization, a relatively simple way of turning 3D points in space into a flat image. It’s not capable of being photorealistic, but it can do a fairly good job, and do it quickly. Various tricks have added to the realism, such as shadow mapping, ambient occlusion, depth of field effects, and so forth. Graphics cards and APIs have been designed around accelerating the process ever since the 3dfx Voodoo moved over from the arcade machines in 1996.
However, advances have slowed over the last few years: the tricks used to squeeze more realism out of the process are becoming ever more convoluted, and the huge leaps in visual quality that games made a decade ago have dwindled to incremental gains. At its heart, rasterization is only a rough approximation of the world, built around objects rather than light. Moving to a ray-traced model is a fundamentally different approach.
RAY TRACING is a rendering technique that (and this is just the ridiculously quick and simple explanation) traces a path from an imaginary eye through each pixel on the screen, and calculates the color of that pixel where the path intersects an object. The more depth you build into the physical modeling of light, the more realistic it gets. Adding translucency, refraction, reflections, fluorescence, diffusion, and other optical properties adds complexity, as more rays need to be cast at the scene.
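That eye-through-pixel loop can be sketched in a few lines. The following is a hypothetical toy illustration in Python, not how RTX hardware does it: one ray per pixel, fired from an eye at the origin and tested against a single hard-coded sphere, with a crude depth-based shade for hits.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t;
    # direction is unit length, so the quadratic's a coefficient is 1
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def render(width, height):
    """Cast one ray per pixel from an eye at the origin through an image plane."""
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0  # arbitrary test scene
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel to a point on an image plane at z = -1
            px = 2 * (x + 0.5) / width - 1
            py = 1 - 2 * (y + 0.5) / height
            length = math.sqrt(px * px + py * py + 1)
            ray = (px / length, py / length, -1 / length)
            t = intersect_sphere((0, 0, 0), ray, sphere_center, sphere_radius)
            # Shade hits by inverse distance; misses stay black
            row.append(1.0 / t if t is not None else 0.0)
        image.append(row)
    return image
```

Everything past the intersection test is where the real work lives: a production renderer replaces that single shade value with full material and lighting calculations.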
Achieving decent results with such a method involves a huge number of rays of light: thousands upon thousands. The math involved multiplies alarmingly. Setting limits, such as the number of times a ray can be reflected and the number of rays per pixel, helps keep rendering speeds practical, but it remains a deeply computationally intensive task. There is always a compromise between speed and realism.
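To put numbers on how quickly that math multiplies, here is a back-of-the-envelope counter; the figures and the two-rays-per-bounce assumption are illustrative, not any real renderer's accounting.

```python
def ray_budget(width, height, samples_per_pixel, max_bounces, branches_per_hit=2):
    """Worst-case number of rays traced for one frame, assuming every hit
    spawns `branches_per_hit` secondary rays (e.g. one reflection plus
    one refraction), so the count grows geometrically with bounce depth."""
    primaries = width * height * samples_per_pixel
    # Geometric series: 1 + b + b^2 + ... + b^max_bounces rays per primary
    rays_per_primary = sum(branches_per_hit ** d for d in range(max_bounces + 1))
    return primaries * rays_per_primary

# Even modest settings demand over a hundred million rays per frame
print(ray_budget(1920, 1080, samples_per_pixel=4, max_bounces=3))  # → 124416000
```

Tightening `max_bounces` or `samples_per_pixel` is exactly the speed-versus-realism compromise the text describes: each notch down cuts the ray count dramatically, and each notch up buys realism at a steep cost.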
Ray tracing was born from ray casting, developed by Arthur Appel in 1968: a relatively simple system that cast a single ray into the scene to sample a pixel’s color when it intersected with an object. In 1979, Turner Whitted expanded this into full recursive ray tracing by splitting the ray at the point of intersection into three types: reflection, refraction, and shadow. The level of realism jumped hugely. Early demonstrations involved a lot of mirrored balls and checkerboard patterns. Rendering times were horrendous, and it remained a niche pursuit until the computational power it needed became widely available.
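That three-way split is naturally expressed as a recursive function. The sketch below uses a hypothetical toy scene (two hard-coded spheres, one point light, grayscale shading) and omits refraction rays for brevity: each hit fires a shadow ray toward the light and, for shiny materials, a reflection ray that recurses up to a fixed depth.

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(v, s): return tuple(x * s for x in v)
def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

# Toy scene: (center, radius, diffuse, reflectivity) spheres and one point light
SPHERES = [
    ((0.0, 0.0, -3.0), 1.0, 0.6, 0.4),       # partly mirrored sphere
    ((0.0, -101.0, -3.0), 100.0, 0.8, 0.0),  # huge sphere standing in for a floor
]
LIGHT = (5.0, 5.0, 0.0)
MAX_DEPTH = 3  # cap on reflection bounces: the speed/realism dial

def nearest_hit(origin, direction):
    """Closest sphere intersection along a unit-length ray, or None."""
    best = None
    for center, radius, diffuse, refl in SPHERES:
        oc = sub(origin, center)
        b = 2 * dot(direction, oc)
        c = dot(oc, oc) - radius * radius
        disc = b * b - 4 * c
        if disc < 0:
            continue
        t = (-b - math.sqrt(disc)) / 2
        if t > 1e-4 and (best is None or t < best[0]):  # epsilon avoids self-hits
            point = add(origin, scale(direction, t))
            normal = normalize(sub(point, center))
            best = (t, point, normal, diffuse, refl)
    return best

def trace(origin, direction, depth=0):
    """Whitted-style shading: a shadow ray for direct light plus a
    recursive reflection ray, cut off at MAX_DEPTH."""
    hit = nearest_hit(origin, direction)
    if hit is None or depth >= MAX_DEPTH:
        return 0.0  # black background
    _, point, normal, diffuse, refl = hit
    to_light = normalize(sub(LIGHT, point))
    # Shadow ray: the light only contributes if nothing blocks it
    # (any hit counts as blocking -- a sketch-level simplification)
    shaded = nearest_hit(point, to_light) is not None
    color = 0.0 if shaded else diffuse * max(0.0, dot(normal, to_light))
    # Reflection ray: mirror the view direction about the normal and recurse
    if refl > 0:
        reflected = sub(direction, scale(normal, 2 * dot(direction, normal)))
        color += refl * trace(point, reflected, depth + 1)
    return min(color, 1.0)
```

A refraction branch would recurse the same way with a bent ray, which is why each optical property the scene models adds another multiplier to the ray count.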
Ray tracing has a number of strengths over traditional methods. It excels at reflections, hence those mirrored balls. Rasterization engines can approximate these using environment maps, which work well enough at long range, but when the object and reflective surface are close together, the effect starts to fall apart. Ray tracing produces accurate reflections out of the box, including reflections of reflections, something that is almost impossible with rasterization.
Transparency is another strong point. Adding this effect with rasterization requires complicated sorting of transparent polygons and careful attention to rendering order. Ray-traced images handle it simply: Just set a material’s transparency level, and the light rays fall into place. Shadows are similarly easy, with no need for memory-intensive shadow maps. Lastly, there’s the ease with which it handles curved surfaces: no need for tessellation,