Tech Reviews

Animation Magazine

So, let’s talk about CrazyTalk Animator 3. In last issue’s discussion of Perception Neuron, I mentioned that 3D motion-capture data could be fed into CTA3 and applied to 2D characters through the bone deformation system. But that is certainly not the end of CTA3’s functions.

The bone-based skeletal system has quite a few applications in a number of different areas within CTA3. The deforming capabilities allow you to attach bones to any imported image to add subtle (or not-so-subtle) animation. Get a scan of the Mona Lisa, extract her from the background, bring her into CTA3, attach a series of bones, and then add a little head bob to some music.

This bone-driven approach also lets you break characters apart for more complex animation, separating limbs and head from the body, and mask out influences so that, for instance, the arms don’t deform the chest.

Then, these bone systems can be saved as templates, so you can swap out characters while reusing the same rigs and animations between them. Say you have five zombies: you can animate one and use its bone setup and animation as at least a foundation for the others. (You wouldn’t want to use exactly the same animation, because zombies are individuals, of course.)

Included in CTA3 is a library of human and animal motions, which can be layered into the timeline as sequences of animation that blend into one another. You can then take the sprites you’ve built as components of your character and attach them to the corresponding pieces on the template, including accessories like hats, jewelry, etc.

Facial animation has been enhanced with some key audio features, like scrubbing and text-to-speech tools for syncing the audio to phonemes. And with an added freeform deformation tool, you can add more movement to your original sprites to put in some additional personality. The facial animation system has been expanded beyond human faces to include animals, too.

There are definitely more things to find in the CrazyTalk Animator package, including drag-and-drop animation behaviors and curves, customized FFDs for props, and expression-based animations, as well as the integration of 3D motion, including motion-capture data, as mentioned in conjunction with the Perception Neuron nod.

But try it out for yourself, or ask clients like Jimmy Kimmel, HBO, and Keanu Reeves, for starters.

The alSurface shader that Arnold lovers cherish has been adapted for V-Ray, primarily as a complex skin shader. And MDL shaders from the NVIDIA library have been incorporated, as well as Forest Color support.

Furthermore, a ton of stuff has been pushed to the GPU for faster processing, including in-buffer lens effects, aerial perspective, V-Ray clipper, directional area lights, stochastic flakes, rounded corners, matte shadow, render mask, irradiance maps, and on-demand MIP-mapping. And they threw in a low GPU thread priority for load balancing.

Everyone loves beautiful renders. But everyone loves them more when they’re faster!

Allegorithmic has been going strong ever since it came out of the gate with Substance Designer and Substance Painter, taking the game and visual-effects industries by storm with its PBR approach to texture and shader design, as well as an intelligent workflow for dynamic shaders that use the extra maps, such as normals, height, and occlusion, to drive how the shader behaves. These are the “substances.” And Substance Designer is where they are built.

In Substance Designer 6.0, Allegorithmic appears to have found room to make a powerful piece of software that much more powerful. Along with all kinds of preferences to improve the user experience, and some tweaks under the hood to make things faster, there are a number of new nodes to play with in your Substance graph.

A seemingly innocuous but deceptively powerful addition is the curve node. We are all familiar with controlling colors and such with curves in Photoshop or Nuke or any number of color-grading tools, and in SD6 you can drive color corrections or gamma or whatnot with Bézier points on the curve. That’s the bread-and-butter stuff, though. Remember, in Substance Designer you have other map parameters that can be affected, like normals and height. By feeding a curve into the height parameter, you are essentially defining the equivalent of a loft profile: the curve defines the top surface of the geometry the substance is attached to. Think of it like wainscoting on a wall, or intricate Rococo etching, all without the extra geometry.
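Under the hood, driving height with a curve amounts to remapping height values through a user-drawn profile. Here is a minimal NumPy sketch of that idea; the control points and variable names are purely illustrative, since SD6 itself is node-based rather than scripted:

```python
import numpy as np

# Hypothetical stand-in for a curve node feeding the height channel:
# a few control points define a profile that remaps a linear gradient,
# carving a wainscoting-style raised band without any extra geometry.
xs = np.array([0.0, 0.25, 0.3, 0.7, 0.75, 1.0])  # input height values
ys = np.array([0.0, 0.0, 0.4, 0.4, 0.0, 0.0])    # remapped output height

gradient = np.linspace(0.0, 1.0, 256)        # linear ramp across the surface
profile = np.interp(gradient, xs, ys)        # curve-driven "loft profile"
height_map = np.tile(profile, (256, 1))      # extrude the profile into a map
```

Rendered as a height or displacement map, the flat gradient becomes a raised molding strip, which is exactly the loft-profile effect described above.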

The text node is a similar, simple node that allows you to add text to the substance (duh!), driven by system or custom fonts, and fully tileable.

Nodes can now be in 16-bit or 32-bit float, taking advantage of high dynamic range and allowing for internal creation and editing of HDR environments for lighting. And you can now bake out textures at 8K! But my favorite is the ability to shoot and process your own surface textures. By taking samples of your material with the lights at different angles, you can, through Substance Designer 6.0, extract proper normal, height and albedo maps, on top of the color, to get a more precise replication of real-world material. That is pertinent to shader development both inside and outside of the Substance Designer workflow.
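Allegorithmic’s capture pipeline is proprietary, but the underlying idea of shooting a material under lights at different angles is classic photometric stereo. As an illustration only, here is a minimal Lambertian photometric-stereo sketch in NumPy; the function and names are mine, not Allegorithmic’s:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo from photos of a
    Lambertian surface lit from known directions.

    images:     (k, h, w) grayscale intensities, one image per light
    light_dirs: (k, 3) unit light-direction vectors
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                          # (k, h*w)
    # Lambert: I = L @ g, where g = albedo * normal; solve per pixel
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None) # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)                 # reflectance per pixel
    normals = g / np.maximum(albedo, 1e-8)             # unit surface normals
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

From the recovered normals you can integrate a height map, which mirrors the normal/height/albedo split the article describes, though a production tool handles shadows, highlights and calibration far more robustly than this sketch.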

As I said earlier, a super-strong release for an already super-strong product. [ Todd Sheridan Perry is a visual-effects supervisor and digital artist who has worked on features including The Lord of the Rings: The Two Towers, Speed Racer, 2012, Final Destination 5 and Avengers: Age of Ultron. You can reach him at todd@teaspoon

through New Port City were dubbed “ghost cams.”

“Guillaume Rocheron used Google Maps to create a rough version of the camera going through Hong Kong,” says Bonami. “Originally, the idea was to have drone footage, but it didn’t work out because of Hong Kong regulations. A team spent nights going up onto rooftops and taking photographic footage along the path the camera would travel.”

Miniatures were made of monolithic buildings that were turned into CG versions. “We put in many 3D props,” says Bonami. “Once everyone was happy with the camera move, Rupert and Guillaume decided where to put the storytelling solograms. Then we could start putting in the secondary ones as well as directing crowds, cars, highways and street signage.”

Unlike the Major (Johansson), Kuze, played by Michael Pitt, is viewed as a failed robotic experiment.

“Rupert had us strip the design down in order to see more of the internals, muscles and skeleton, to emphasize that Kuze mirrors Scarlett’s character,” says Bonami. “Most of Michael Pitt was replaced, but we kept his eyes, lips and the subtlety of his expression.”

Clockwork mechanisms are revealed beneath the shell encasement.

“We cheated the lighting of the skeletons by adding some subtle rim lighting,” Bonami says. “Guillaume brought the skeleton prop (created by Weta Workshop) on set, so we had a lighting reference. The prop muscles were too much like clear plastic, so we added a Gummy Bear feel to make them look more organic.”

Anime Expectations

Living up to the anime was another challenge, especially for some of the best-known scenes. “The shelling sequence is one of the most iconic sequences from the anime, so there was no pressure at all!” Bonami says with a laugh. “We wanted to add a photorealistic look to it. Everything was shot with the skeleton prop, which was replaced with a CG version for camera moves and reframing. There are still a few shots with the practical skeleton. We studied the color balance for the scene with Rupert. When she comes out of the white liquid, we used practical and CG shots.”

The fluid simulations were complicated by having to match the live-action footage. “The previs gave us a good reference, but we wanted the skin to scatter and look slightly transparent, as well as to get the right elevation toward camera, so that the framing worked with all of the shots,” he says.

In another key sequence, Major leaps off of a high-rise building. “We had great reference of a stunt person wearing the thermoptic suit jumping off of the roof,” says Bonami, who had to make the invisible Major visible for the viewer. “The thermoptic suit doesn’t always work well, so that is when the high-tech outfit reveals how it works. However, you still have

“Yeah, there were some days when we just realized the production was going to be shut down, because if we went outside in all of that wind, it was going to be bad — very, very bad,” says Wassel, a veteran of The Fast and the Furious, Fast Five and Furious 7.

Window Shots

Although not as exotic or unusual as Iceland, Cleveland played a significant role in F8 by subbing for New York, where filming is pretty difficult if you want to stage a high-speed chase with a half-dozen high-powered cars. Over the course of three weeks, crews used city streets and pushed cars out of windows for one of the more memorable sequences in the film.

This time again, the secret sauce was a mixture of real cars being pushed out of the windows of a building in addition to CG cars based on the information gained by shooting the real ones falling.

“There were some cars that were weighted to fall a certain way to get a specific look to the fall, and there were others that we adjusted for other kinds of impacts,” says McIlwain. “A tremendous amount of thought and planning goes into destroying the cars, but not all of them are just thrown away at the end, since it makes more sense to recycle or reuse them when you can, even if it means having to fix them up a bit after you’ve tossed them around.”

In one of the more chilling scenes, Cipher, the film’s cyber villain, unleashes a squad of self-driving cars that have been hacked. The filmmakers refined this plot point before fears of actual hacking into car computer systems became part of the headlines, and now it seems almost prescient. Here the cars were both real and digital again, and shot on the streets of Cleveland without any of the A-list cast on set. Later on, the VFX team merged the footage of the cars and the actors to bring together the edgy series of events.

“It’s really all about imagining something fun that will make it worthwhile for the people who come to see these movies, so they keep coming back to see them when a new one is made,” says Wassel. [ Karen Idelson is an entertainment technology writer whose dog wasn’t too happy about staying inside while she wrote this article.

there goes six months of my life,’” says Jarrett, with a laugh.

There were similar replacements done for the Laura character played by Dafne Keen and the stuntwomen who doubled her on screen. Complex scans were done of Keen and each stunt performer so that their images could be matched in action sequences as believably as possible. As time-consuming as the process can be, you can’t argue with the results. Early attempts at facial, head and neck replacement in films like Titanic yielded mixed results at best. Since then, the process has become more refined, and filmmakers have learned from past attempts to create this illusion.

“What I found was that we’d solved most of the technological issues, really,” says Jarrett. “But what we really needed was lots and lots of time to noodle with our shots, because we’re used to seeing faces and we know what they should look like. And then, if you’re dealing with Hugh’s face, you’ve got all that baggage on top of that as well, so you’re going to have to spend time with lighting and other things to bring the face to the point where it looks real to the audience, and their preconceived ideas about how Hugh should look play into all of that.”

Time Management

Visual-effects artists often had to work through the details, making subtle and even unexpected tweaks to make the replacements seamless. For Jarrett, who was on the film for about 15 months, time and craft became key.

Jarrett, who is also a veteran of Harry Potter and the Sorcerer’s Stone and Sweeney Todd: The Demon Barber of Fleet Street, credits Mangold with giving the film its unique look and imagining visual effects that worked with a pared-down, yet still imaginative, look for Logan. Mangold himself has mentioned the visual reference points of films such as The Wrestler and classics such as Shane, The Cowboys and The Gauntlet when designing his approach to the story.

“I learned an awful lot from Jim (Mangold) in terms of action and storytelling,” says Jarrett. “Developing the scene where there was a chase to the train was just a really fun sequence to work on, and Jim’s ideas about how (Logan) shouldn’t just crash through the fence, and everything (Logan) does goes wrong and he has to think of another way to do things — that’s essentially every beat of the scene. He always wanted to find a way to make it more fun and more about the character.” [ Karen Idelson is an entertainment technology writer and former visual-effects artist who lives in the South Bay.
