HWM (Singapore)

When Reel Gets Real – The Cost of CGI

Text by Marcus Wong. Art Direction by Ken Koh.

When was the last time you watched a movie and wondered if that explosion actually occurred, or if it was created in a computer? Computer-generated imagery (CGI) has reached a point where, for most intents and purposes, the “reel” thing is as good as the real thing to the audience. We take a look at how CGI got to this point, and where it’s going from here.

Start

Particles, grids, collisions, light simulations. You’d be forgiven for thinking we were talking about astrophysics or some other school of science, but those are just some of the considerations that CGI artists grapple with in their attempts to bring things to life on the big screen.

“We used to have to fake the bouncing of light, or the way things reflected off each other, because it was too computationally expensive to do it ‘correctly’,” says Mr Philip Miller, Director of the Professional Solutions Business Unit at NVIDIA, as he explains how “classic” CGI used to be done.

When CGI first started, it worked on the concept of point lights and spot lights. These could be placed wherever you pleased, and could both add and subtract light. The only other control was whether the objects in the scene cast shadows or not. With so little to work with, artists had to resort to tricks, piling on extra lights and CG elements just to make things look realistic.
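To get a feel for what that looked like in practice, here is a minimal sketch of classic direct lighting with point lights, written in Python purely for illustration. The light positions, intensities and function names are our own invented examples, not taken from any production renderer; the negative-intensity light stands in for the old trick of “subtracting” brightness from an area.

```python
import math

# Each "light" is just a position and an intensity. A negative intensity
# reproduces the old trick of a light that darkens an area instead of
# brightening it.
LIGHTS = [
    {"pos": (0.0, 3.0, 0.0), "intensity": 60.0},   # key light above the scene
    {"pos": (2.0, 1.0, 2.0), "intensity": 25.0},   # fill light
    {"pos": (-1.0, 0.5, 0.0), "intensity": -8.0},  # "negative" light to fake a dark corner
]

def shade(point, normal):
    """Classic direct lighting: sum every light's contribution, with no bounces at all."""
    total = 0.0
    for light in LIGHTS:
        # Vector from the surface point towards the light
        lx, ly, lz = (light["pos"][i] - point[i] for i in range(3))
        dist_sq = lx * lx + ly * ly + lz * lz
        dist = math.sqrt(dist_sq)
        # Lambertian term: how squarely the surface faces the light
        n_dot_l = max(0.0, (normal[0] * lx + normal[1] * ly + normal[2] * lz) / dist)
        # Inverse-square falloff; negative intensities subtract light
        total += light["intensity"] * n_dot_l / dist_sq
    return max(0.0, total)  # clamp so the fake darkening can't push below black

# Brightness of a point on the floor, facing straight up
print(shade(point=(0.0, 0.0, 0.0), normal=(0.0, 1.0, 0.0)))
```

Because nothing in a model like this bounces or reflects, every bit of “natural” behaviour has to be faked with yet another hand-placed light.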

It is right if it looks right

CG in movies essentially started as a mix of visual tricks and optical illusions to create a sense of “reality”. However, as the quality of displays began to increase, it became harder and harder to get away with faking it, and even harder to find people with the expertise and knowledge to do it.

Philip recounts a demonstration by Pixar on how they used to render scenes in Toy Story – using hundreds of lights to simulate a basic daylight interior. Because there was no way to make objects reflect or absorb light naturally, you needed hundreds of lights acting on the room in unintuitive ways – shining from underneath the floor, bouncing up off the ceilings, acting in negative (making areas darker instead of brighter) – just to mimic what natural light would do.

Using the laws of nature

Today, that same scene would probably take only six lights, because studios can now apply a physically-based approach: they simply allow the light to bounce the way it does in the real world, interacting with objects based on the way those objects absorb and reflect light. It’s something that’s only possible now, thanks to recent increases in computing power.

This was something the industry knew how to do even back in the 90s, but the level of processing then was too slow – it would have taken a whole weekend just to complete the rendering! Today, we’re at a level where the processing is fast enough to be interactiv­e, and that’s where things are really changing.
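As a rough illustration of what “letting light bounce the way it does in the real world” means, the toy sketch below follows individual light paths as they bounce around a scene, losing energy at each hit according to how much the material absorbs. The scene, the albedo values and the function names are all our own illustrative assumptions, not any studio’s actual renderer.

```python
import random

# Toy scene: each surface only has an albedo, the fraction of light it
# reflects rather than absorbs (0 = pitch black, 1 = perfect mirror).
SURFACES = {"floor": 0.6, "wall": 0.4, "ceiling": 0.8}

def trace_path(max_bounces=5):
    """Follow one light path, attenuating its energy at every bounce."""
    energy = 1.0
    for _ in range(max_bounces):
        surface = random.choice(list(SURFACES))  # pretend the ray hit this surface
        energy *= SURFACES[surface]              # the material absorbs the rest
        if energy < 0.01:                        # stop once the path carries almost no light
            break
    return energy

# A renderer averages a large number of such random paths for every pixel.
paths = [trace_path() for _ in range(10000)]
print(f"average energy reaching the camera: {sum(paths) / len(paths):.3f}")
```

Averaging many thousands of such random paths for every pixel is, loosely speaking, why physically-based rendering is so computationally expensive, and why it only became practical once the hardware caught up.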

Everything speeds up but render times

“In 1997, the CG on The Fifth Element used 256MB of RAM, and today most big shots require 32 to 48GB. Back then, an entire show would take around 2TB of disk space; today, textures on a single asset alone can take up that much space,” says Christopher Nichols, Creative Director of Chaos Group, as he explains why the increases in computing power haven’t led to a decrease in the time needed for rendering.

The difference is that a single talented artist can now achieve what used to take a team of people, and that computing power can be drawn from the cloud thanks to faster internet connectivity, allowing smaller studios to get into the mix as well. Eight years ago, a large movie might have had on the order of 500 VFX shots and taken two years to create. Today, the number is closer to two thousand, and they have to be delivered in less than a year! The demand for high-quality CG content is huge, and it’s not just in movies but in broadcast as well. Christopher says the visual effects in broadcast rival those of many films, and some shows have drastically shorter deadlines, so the need for CGI – and thus for faster software and hardware – will only increase.

Directing the virtual

In movies, the ability to get real-time previews of what a scene will look like lets directors make decisions more efficiently. Where they would previously have to wait days for the rendering process to complete, today the rendering can be handled by the system’s GPUs, making it a much faster process.

An example of this is Chaos Group’s use of V-Ray RT for MotionBuilder to create the graphics for director Kevin Margo’s Construct. As he demonstrates in a YouTube clip*, the system is able to play back a low-resolution path-traced version of a cut in real time. At any moment, they can hit pause, and the image on screen resolves to full quality, allowing Margo to see if he needs to adjust lighting or shading. This also provides valuable feedback to the actors being filmed, as they are able to view their takes immediately. The result is movies with a better sense of realism, because the actors are better able to visualize what they’re interacting with.
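The “hit pause and it resolves to full quality” behaviour comes from progressive rendering: the image is a running average of noisy random samples, so the longer it is allowed to accumulate, the cleaner it gets. Here is a self-contained sketch of that accumulation idea, under our own assumptions; nothing in it reflects V-Ray’s actual API.

```python
import random

def noisy_sample(true_value=0.5, noise=0.3):
    """Stand-in for one path-traced sample of a pixel: correct on average, noisy individually."""
    return true_value + random.uniform(-noise, noise)

def progressive_estimate(num_samples):
    """Accumulate samples; the running average converges towards the true pixel value."""
    return sum(noisy_sample() for _ in range(num_samples)) / num_samples

# A handful of samples per pixel is enough for a rough real-time preview...
print("preview estimate:", round(progressive_estimate(8), 3))
# ...while pausing and letting thousands accumulate "resolves" the image to full quality.
print("paused estimate :", round(progressive_estimate(20000), 3))
```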

Old school is new school

Because the physically-based approach allows you to treat CG like the world around you, it’s much easier for people to relate to, and Philip tells us that this is affecting the way studios work too.

Instead of hiring a lighter – someone who specializes in creating the illusion of light and depth in a graphics program – they now hire someone who is used to physically lighting a set (like a gaffer). This way, the learning curve is a lot gentler, and the predictability a lot higher.

The thing that makes it all possible is the evolution of the processors in computers today. No longer do studios have to make do with a few approximations of hair. Now, they have software that can manage the complexity of rendering every single strand, and the hardware to make it happen fast enough to be practical.

Moving forward

The trend is for rendering to be 100% interactive, and rendering appliances like NVIDIA’s Iray Visual Computing Appliance (VCA) are making it possible for designers to interact with their ideas as if they were already real. The Iray VCA packs eight of NVIDIA’s most powerful GPUs in one machine, each with 12GB of graphics memory, and is built to be scalable, allowing companies to build rendering clusters specific to their needs.

One such company is Honda, which, as an early adopter, uses a total of 25 Iray VCA machines working together to refine the styling of its cars in real time. “Our TOPS tool, which uses NVIDIA Iray on our NVIDIA GPU cluster, enables us to evaluate our original design data as if it were real,” says Daisuke Ide, System Engineer at Honda Research and Development.

High-quality photorealistic CGI will soon be seen in everything from product design to advertising, and it won’t be too long before you can do augmented reality. Just as Honda’s designers can view their new cars virtually, soon you too will be able to point your tablet at your living room and – using just the built-in camera – see furniture pieces superimposed in the space.

Given that apps like Interior Design for iPad and Autodesk’s Homestyler already offer ways to create 3D floor plans that you can populate with virtual furniture, and apps like iTracer and Mandelbulb Raytracer HD are bringing raytracing off the desktop and onto the iPad, it certainly seems like it won’t be long till the scenario above is realized. Call it the next step in reel world techniques crossing over to the real world.
