Variety

‘WE KNEW IT WAS GOING TO BE SOMETHING NOBODY HAD SEEN BEFORE.’

— Dan Deleeuw

ger doing an official “take” so that they could get additional information about how he moved and watch him while he experimented with the character in-between shooting. This gave them a better sense of how he moved his eyes and other parts of his face.

In the past, on films including “Beauty and the Beast,” high-res facial captures would be done separately, when the actor wasn’t around other performers or on set, so the vfx team wasn’t able to get the same kind of spontaneous capture even though the capture of the face itself was higher resolution.

After capturing Brolin on set, the vfx team looked at the machine-generated version of his performance directly next to Brolin’s actual performance. Through careful examination, more adjustments were made, and the algorithm was then given more information that it could use to “learn” Brolin’s face.
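To make that compare-adjust-retrain loop concrete, here is a minimal, hypothetical Python sketch: generate a face from the model, flag the frames that drift furthest from the actor’s capture, feed corrected frames back in as extra training data, and refit. Every name, data shape and the toy least-squares “model” here are invented for illustration; none of it reflects Digital Domain’s actual tools or pipeline.

```python
# Illustrative sketch only: a toy version of the compare-adjust-retrain loop
# described above. All names and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for per-frame facial data: each row is one frame, each column a
# tracked facial parameter (e.g., a blendshape weight).
captured_frames = rng.normal(size=(500, 40))          # "ground truth" capture
model_weights = rng.normal(scale=0.1, size=(40, 40))  # toy linear "model"

def generate(frames, weights):
    """Produce the machine-generated performance from the capture."""
    return frames @ weights

def worst_frames(generated, reference, k=20):
    """Rank frames by how far the generated face drifts from the capture."""
    per_frame_error = np.linalg.norm(generated - reference, axis=1)
    return np.argsort(per_frame_error)[-k:]

training_inputs = captured_frames[:100]
training_targets = captured_frames[:100]

for round_num in range(3):
    generated = generate(captured_frames, model_weights)
    flagged = worst_frames(generated, captured_frames)

    # In production an artist would review the flagged frames side by side
    # with the actor's real performance and correct them by hand; here we
    # simply copy the captured frames as the "corrected" targets.
    training_inputs = np.vstack([training_inputs, captured_frames[flagged]])
    training_targets = np.vstack([training_targets, captured_frames[flagged]])

    # Refit the toy model on the enlarged set (least squares stands in for
    # whatever learning algorithm a studio would actually use).
    model_weights, *_ = np.linalg.lstsq(training_inputs, training_targets, rcond=None)

    error = np.linalg.norm(generate(captured_frames, model_weights) - captured_frames)
    print(f"round {round_num}: total error {error:.2f}")
```

The point of the sketch is the shape of the workflow, not the math: each pass gives the model more examples of what the artists consider correct, so the next automatic pass starts closer to the actor’s real performance.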

“Josh Brolin is a great collaborator,” says Deleeuw. “Thanos is definitely his performance and everything he brought to that role was incredible, and he was really interested in what we were doing and how it was going to take his performance into this new realm. After we saw the first test from Digital Domain, we knew it was going to be something that nobody has ever seen before. We were able to go back and show Josh, and he got this giant smile on his face. He recognized that what he put into the performance he can actually see in the CG, and he said, ‘This is the first time I’ve seen what’s in my mind on the screen.’”

While machine learning was able to crush its close-ups in “Avengers: Infinity War,” it’s also being used to take another type of shot out of the dreaded uncanny valley: crowd simulations. When you take shots of a smaller group of people and then try to reproduce and randomize those shots to make it appear that a larger crowd is present, the eye can easily pick up movements that look artificial or fake, and even pick up patterns, like a shirt color that seems to pop up on every 20th person or so. Then the audience is suddenly aware they’re looking at 10 people who’ve been cloned to look like they’re a group of 10,000. Machine learning can make movements seem more natural and more believable, and give animators more time to tweak what they’ve done so it looks more real overall.
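As a rough, hypothetical illustration of why verbatim cloning is so easy to spot (not a description of any studio’s crowd tools), the Python sketch below builds a large crowd from a handful of filmed performers two ways: copied exactly, where the same shirt color recurs on a fixed cycle, and with per-agent jitter in timing and color, the kind of variation that machine learning and procedural systems can supply far more convincingly than simple noise.

```python
# Toy demonstration of repetition in naive crowd cloning vs. per-agent variation.
# All numbers and names are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)

N_SOURCE = 10       # performers actually filmed
N_CROWD = 10_000    # agents needed in the final shot

source_shirt_colors = rng.integers(0, 255, size=(N_SOURCE, 3))
source_cycle_length = rng.uniform(0.9, 1.1, size=N_SOURCE)  # seconds per walk cycle

# Naive cloning: agent i reuses performer i % N_SOURCE verbatim, so the same
# shirt color and identical motion recur every N_SOURCE agents.
naive_colors = source_shirt_colors[np.arange(N_CROWD) % N_SOURCE]

# Varied cloning: each agent still starts from a filmed performer but gets its
# own phase offset, slightly different timing, and a perturbed shirt color.
base = rng.integers(0, N_SOURCE, size=N_CROWD)
phase_offset = rng.uniform(0.0, 1.0, size=N_CROWD)   # would desynchronize the walk cycles
cycle_length = source_cycle_length[base] * rng.uniform(0.95, 1.05, size=N_CROWD)
varied_colors = np.clip(
    source_shirt_colors[base] + rng.normal(scale=20, size=(N_CROWD, 3)), 0, 255
)

def duplicates(colors):
    """Crude repetition measure: exact duplicate shirt colors among 100 agents."""
    unique = np.unique(colors[:100].round().astype(int), axis=0)
    return 100 - len(unique)

print("exact duplicates, naive cloning:", duplicates(naive_colors))
print("exact duplicates, with variation:", duplicates(varied_colors))
```

The naive crowd repeats the same ten looks over and over, which is exactly the pattern an audience notices; the varied crowd spreads those repeats out until they stop registering.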

Since AI can learn from its previous passes at a take, vfx artists can “teach” it to make realistic faces or environments that are put together more quickly. Human feedback will still be an invaluable part of the process, but the algorithm will give animators and vfx artists more opportunity to experiment with possible looks for any given environment or character.

“Without a doubt, creating Thanos has been one of the most complex things we’ve done, and I don’t know if machine learning and artificial intelligence is going to revolutionize effects, but it will change it,” says Port. “We’re at a point where there’s really nothing that can’t be done in visual effects, but you always have to look at the production time frame, and machine learning makes it possible to do more within that time frame.”

Deleeuw agrees. “There’s a lot of time and energy and effort that just goes into getting to the point where you can actually get to the screen and look at something and comment on it,” he says. “And then you have to get to that creative point where you’re spending more time working on a shot, making it look real. These can be challenging shots that need some time to try out different solutions. With machine learning you get there faster, so you can spend more time creating something.”
