The New Gold Rush Is Now

Animation Magazine - Opportunities - By Martin Grebing

With the old barriers to creator ownership and distribution gone, there’s no reason not to create your own stuff — and find an audience for it.

Once upon a time, it was all but impossible for independent animators and filmmakers to have their work seen by the masses. And the concept of an independent producer actually making a living, much less a hefty profit, off their creative content was little more than a pipe dream.

First, you had to beg, borrow and steal in hopes of raising enough money, sweat equity and volunteerism to see your project through to completion — only to then have to figure out how to get your work in front of a competent agent, producer or distributor, with the distant hope that they might consider your project for assimilation into their pipelines, control and ownership. More often than not, your project would become the property of said media giant and you would be left to your own devices. The financial rewards for your blood, sweat and tears lay almost entirely in the hands of others. Sadly, upon executing a contract, most independent producers would be cast aside, holding an empty bag with nary a penny in sight — or, ironically enough, even owing large amounts of money to the distributor.

Much to the chagrin of old-school media and entertainment conglomerates, and to the long overdue boon of the independent artist, everything has changed.

With the current boom of internet and streaming-based content, the major corporations have lost some of their control over broadcasting and monetizing your media. At one point, the entertainment industry was an exclusive club, entirely controlled by executive elitists and their unbreachable gatekeepers. While this paradigm remains in certain ways, there are exponentially more venues for you to broadcast your work and — gasp! — even make money from your efforts. You now have the power to become your own producer, broadcaster, distributor, marketer and merchandiser all in one.

A quick internet search of “best video sharing sites” will give you dozens upon dozens of websites where you can showcase and promote your work, i.e., distribution channels. Once you have eyes and ears on your work, the rest is up to you. And with the ease and convenience of earning money and selling things online, the sky’s the limit. Monetizing can be as simple as clicking the “Monetize this video” link on the video hosting service of your choice. You can even link back to your own website or online stores where you sell T-shirts, DVDs, posters, apps and a host of other swag.

Countless independent producers, even children, are making millions upon millions of dollars via free, online video sharing sites by creating and broadcasting their own content. The formula is simple: produce content, post it online for free, promote your content for free, and then Count de Monet.

But don’t let technology’s ease of broadcasting your work across dozens of video sharing platforms placate your grit or lull you into a false sense of security. Just because you can share your work with millions of people doesn’t mean millions of people will watch. No matter how much technology is available or how marketing-savvy you become (from reading Autonomous Animator articles, no doubt), it all boils down to one foundational requirement for making money from your creative content: You have to produce something that people want to watch. It doesn’t matter how much money you spend on production, how deep, meaningful and introspective your arthouse project is to you and your best friend, or even how many years you labored to see your passion project come to life. When it comes to making money from your work, the only thing that matters is the number of eyes and ears that want to see and hear your stuff.

At the risk of offending certain artistic sensibilities, this cold, hard fact has always existed and quite possibly always will: money is the lifeblood of business. For example, if it cost you an arm and a leg to produce your first feature film, you’re going to need a hefty return to pay back your investors and recover your limbs, much less produce a sequel. If you failed to make your first effort perform financially, what rational investor would consider providing funding for your future endeavors? If the concept of acquiring massive amounts of money doesn’t sit right with you, feel free to donate anything above and beyond your basic cost of living to your favorite charities. If nothing else, look at money as a means to keep producing your passion projects while maintaining your desired quality of life.

The new gold rush is here. And like all great rushes of yore, it’s only a matter of time before it runs its course. So act now or forever be left in the dust — or at least until the next one comes around.

Martin Grebing is a multiple-award-winning animation producer, small-business consultant and president of Funnybone Animation. Reach him at www.funnyboneanimation.com.

Director Jon Favreau and his crew go for emotion and humor with extensive VFX and mo-cap in Disney’s hybrid update of The Jungle Book. By Bill Desowitz.

Like J.J. Abrams with Star Wars: The Force Awakens, director Jon Favreau approached his photorealistic remake of Disney’s 1967 animated classic The Jungle Book from both a child’s and an adult’s perspective.

“You’re trying to honor the emotional memory, the perceived memory of people who grew up with this stuff,” he says. “But you’re also trying to make a movie that appeals to the full audience. That’s really what (Walt) Disney set out to do. I stuck with the ’67 story structure but focused on images that I remembered before watching it again.”

That’s a trick Favreau learned as director on Iron Man: It’s not necessarily what’s in the material that’s most important — it’s what you remember. And so he keyed off of the collective memory of those iconic images.

Usually, this high level of tech and artistry is reserved for big action spectacles, but Favreau emphasized that The Jungle Book, originally adapted from the book by Rudyard Kipling, was “a unique opportunity to use it for humor and emotion and showing nature and showing animals. And getting into that real deep, mythic imagery that, I think, always marries well with technology.”

Indeed, it’s the most tech-savvy project the director has ever embraced. Taking his photoreal cue from the Oscar-winning Gravity, where you had a tough time determining what was live action and what was animation, Favreau went for a combination of mocap and CG animation, with newcomer Neel Sethi as the only live actor, playing Mowgli.

He’s raised by Indian wolves Raksha (Lupita Nyong’o) and Akela (Giancarlo Esposito). When the fearsome scarred Bengal tiger Shere Khan (Idris Elba) threatens to kill Mowgli, he leaves his jungle home, guided by Bagheera, the friendly black panther (Ben Kingsley), and Baloo, a free-spirited bear (Bill Murray).

Along the way, Mowgli encounters the hypnotic python, Kaa (Scarlett Johansson), and the smooth-talking Gigantopithecus, King Louie (Christopher Walken).

“The two biggest challenges were how to seamlessly integrate the live-action boy and believably get the animals to talk,” Favreau says. “We looked at animal behavior online for reference and would sometimes exaggerate the environment or scale for effect. Dogs or wolves are very expressive with eyebrows but not with their mouths; cats don’t use their eyebrows; bears use their lips and eyebrows. Each animal provided a different set of tools to use.”

Assembling the Crew

Favreau turned to Oscar-winning VFX supervisor Rob Legato (Hugo, Titanic) to spearhead the movie in collaboration with MPC, which did the majority of CG characters and environments, and Weta Digital, which handled King Louie and the other primates — not surprising, given its King Kong and Planet of the Apes pedigree.

Legato was thrilled to use the best that virtual production has to offer, with some new tech wrinkles to work more quickly, efficiently and believably, as though they were shooting a live-action movie.

“What we were trying to do is remind you that everything is real and to get lost in the performances and story,” Legato says. “The artistic choices that you make in a live environment are based on the instincts and experiences and filmmaking skills that you’ve honed.”

Thus, you have to give very specific instructions to the animators about camera placement so that it all fits cohesively and organically.

“What I’ve been pushing for since The Aviator are tools that allow me to behave the way I want on the set, because I’m used to doing analog work, live-action work. I’m not sure what the angle of the shot is until I see it. And you try things out until it sings, and then you know that’s the shot. It takes three or four takes to do that, but animation is very precise.”

A Virtual Stage

The Jungle Book was shot by cinematographer Bill Pope on two stages. Supervising art director Andrew Jones could wheel a set onto one stage to shoot on while prepping another set on the other stage.

“We had a motion-capture volume, we had actors playing the parts, we had suits, we had sets that were lined up with what the digital set looked like. And then we captured it,” Favreau says. “First, we had an animatic version, as you would on an animated film, then a motion-capture version that we edited, and then, finally, we took that and shot the kid as though he were an element.”

“Jon talked about how our jungle was the stage for primal mythology,” says MPC VFX supervisor Adam Valdez. “He also saw the opportunity to give audiences the wish fulfillment of living with animals, and for that the world and characters needed to pass the test of unblinking believability. We had to create an experience that was charming like the classic animated film, but intense when the story needed it.”

They made use of certain refinements since Avatar, mostly ease-of-use improvements, “where it becomes easier and faster to do, a little more real-time. But the workflow was to get a scene on its feet right away,” Legato says.

Rather than shooting all of the celebrities on stage together with the young actor, they shot their voice work separately and used puppeteers as stand-ins with the boy. This was a more traditional approach to accommodate Favreau’s comfort zone.

“We prevised and captured the movie at the same time, because we were capturing the shot, what’s in the shot, and then the camera coverage of it, and that got edited,” Legato says. “Now we had the analog freedom to just choose when we cut to the close-up, and we picked it like we normally do in live action. That became the blueprint that we were going to bring to the blue screen stage to recreate specifically that shot. And we knew with great authority that it would fit into the whole, because we’d already seen it edited together in previs.”

Not Playing Around

“The innovations were a thing called Photon, which makes the MotionBuilder game version of the scene a little closer to the way we wanted it, and the textures are a little more realistic,” Legato says. “It’s still game-engine quality, but it gives the artist a better clue of what it’s ultimately going to look like. And then we did some other innovations when we were doing the previs and when we were shooting, and in how to evolve motion control. We made this Favreauator thing, a device with which you could program subtle, secondary muscle movements. So when the kid sits on the bear, the animators created a saddle that moved and was actuated by the actual animation of what it was ultimately going to be. So that when you drop the kid into the scene on top of the bear, it’s much more realistic, because what’s driving him is the musculature of the animal underneath him.

“We shot on a 40-foot turntable for a walk and talk. The key light source was a projector, and there’s a technique of saying, if the projector was also a camera, whatever’s in front of it at any one time is going to shadow the person as if he’s walking past trees and various (objects in the jungle). And the turntable moved our computer program, which tells the projector to print in the pattern of the light source. And when you put it all together and the kid’s walking up and down on this hilly thing, it looks like he’s on solid ground way beyond our stage floor and optimally lit by the sun.”

In conclusion, Legato offered: “It’s exciting for me because it bodes well for the future to create anything, and not just for movies that are larger than life about superheroes and destruction. After The Revenant, I think we will be good public relations for bears.”

Bill Desowitz is crafts editor of Indiewire and the author of James Bond Unmasked.

Deadpool was the perfect vehicle for Blur Studio’s Tim Miller to make his directorial feature debut. It’s the R-rated Marvel movie Disney would never make, and it embodies Miller’s maverick approach to filmmaking. (Naturally, Blur made several contributions, including the funky animated closing title sequence.)

Armored mutant Colossus proved to be a great foil for Ryan Reynolds’ snarky anti-hero, and it took a complicated bit of animation by Digital Domain to pull off the 7-foot-tall organic-steel giant. Colossus was Frankensteined together with the help of voice actor Stefan Kapicic, motion-capture performer Andrei Tricoteux for fighting, actor T.J. Storm for regular body motion, actor/stunt performer Glenn Ennis for initial facial shapes, and mocap supervisor Greg LaSalle for the final facial performance.

“Tim wanted Colossus to be portrayed differently than in the X-Men movies. As a nerd, he wanted a return to the comic-book look: a bigger bodybuilder type who’s Russian. But he also wanted photorealism,” says DD’s VFX supervisor Alex Wang, who collaborated with production VFX supervisor Jonathan Rothbart.

“For the body, we looked at Arnold Schwarzenegger during his bodybuilding days, but we wanted him to be much more athletic, so we also looked at football player builds: how long their muscles had to be in order for Colossus to realistically do the movements.

“For the face, we looked at very chiseled and pronounced facial features. But more and more, Tim wanted his face to be based on somebody. It was hard finding an actor that he liked and, at the very last minute, we found that he liked the stuntman on set, Glenn Ennis, for his facial features.”

Mutant Expressions

Miller was particularly keen on using the Mova facial-capture system that DD introduced in the Oscar-winning The Curious Case of Benjamin Button. Turns out that LaSalle, who now works for DD, was a recipient of Mova’s Academy Sci-Tech Award a couple of years ago. Miller turned to him to give the crucial face sync to audio after another actor fell through. LaSalle got to perform Colossus all alone, with live-action plates as reference.

“Tim directed Greg and, using our direct drive system, we would then re-target the actor onto Colossus,” added Wang.

At the same time, DD pushed its muscle system to have greater control of the movement, because muscle and skin sliding tends to be all over the place. “And so we needed to find a way of using our skin simulation to art direct where those lines go,” says animation director Jan Philip Cramer. “Obviously, it’s metal and it can’t look like it’s stretching, but we had to find ways to compensate for natural skin slide that would look right.”

For the metallic finish, DD used cold-rolled steel as reference for the body and hot-rolled steel for his hair. However, the ridges and lines proved troublesome, so DD tweaked Houdini software to place them in targeted positions around his body (rendered procedurally in V-Ray).

A Decaying Hero

Meanwhile, Rodeo FX, under the supervision of Wayne Brinton, completed close to 230 shots for Deadpool, which required fire and embers, grotesque skin alterations, and set extensions.

The mutation introduced into Reynolds’ body changes the structure of his skin and, once he becomes Deadpool, he’s hideous to look at without his tight red Spandex mask. Brinton and his team did concepts for skin decomposition at different stages, using time-lapse photography of rotting vegetables and meat for inspiration. They found that the production plates were too dark to show the subtleties of what they wanted to do, so they added more detail and shape to the skin, modeling with ZBrush, doing lighting passes, and finally compositing in the textures.

This scene was shot continually in one room that had been fitted with gas pipes emitting flames, making the usual practice of submitting individual shots for approval inefficient and awkward. Instead, Rodeo FX asked to submit the finished sequence in its entirety to Rothbart.

The other main sequence that Rodeo FX worked on was a post-disaster scene in which a ship crashes, creating a junkyard of smoldering parts. The scene was shot against a green screen, and then Rodeo FX generated set extensions for the junkyard, composited a matte painting that Blur Studio shared with them, and added smoke and ashes. Rodeo FX produced additional matte paintings based on photos of the set taken during production. The studio added lots of smoke and ashes at the beginning of the scene, when everything is crumbling down, then reduced the intensity as the scene progressed.

“We aimed for a choreography of simulated ash, falling in 3D space,” says Martin Lipmann, compositing supervisor at Rodeo FX. “It’s seemingly minor elements like this that ensure the continuity and believability of a scene like this.”

Bill Desowitz is crafts editor of Indiewire and the author of James Bond Unmasked.

The HP Z series of workstations continues to bring substantial power through hardware, firmware and software updates — even at the entry-level workstations. While I’m a fan of the 800s because I am usually doing pretty robust tasks in visual effects, the 200s should not be ignored as a viable option — especially as an introductory machine, or for those artists who don’t need all that horsepower. Animators come to mind, as do tracking and roto artists.

My review system was the Z240 SFF (Small Form Factor) configuration, which is nearly half the size of its sibling workstation model, made to sit on your desk rather than under it. But it still packs a lot of punch.

The quad-core processor is the step up from Haswell to Skylake at 3.5 GHz, but that’s not really the primary source of the speed. That comes from the NVMe PCIe slot, which takes an HP Z Turbo Drive G2 SSD, providing extremely fast data access compared to typical SATA drives. This is critical for retrieving large data sets like particles in fluid sims, or simply long image sequences. And with up to 64 GB of RAM across the four UDIMM slots, you can throw quite a bit at the machine without taking it down.

Graphics are driven by either NVidia or AMD. My machine sports an NVidia 1200 with 4 GB of VRAM, which is pretty beefy. I do pretty beefy stuff. Lower-cost models would have a FirePro W2100 or an NVidia K420 or K640, which should provide more than enough pixel power for most artists. But if you are using GPU-accelerated compositing or 3D stuff, I’d recommend going for broke.

With all this power, you’d think that the box would be jet-engine noisy. But because HP is always looking for a balance of power and energy conservation, there is an effort to reduce heat, which reduces the workload on the cooling fans, making for quieter machines. That, and the case design does a great job of keeping things pretty whispery.

For individuals, this is a great entry system: a powerful enough workstation to get most animation, art and visual effects tasks done — especially if you boost it up with some RAM and a Turbo Drive. And for studios, you could populate an entire roto or tracking department with a fleet of these machines at a fraction of the cost of the Z840s — which are great machines, but potential overkill.

Chaos Theory VRscans
www.vrscans.com

The idea of creating photorealistic shaders from scratch is daunting … for any render engine. There may be repositories and libraries of pre-built shaders that you can start from, but those never really work out of the box, and could require hours of tweaking to get even an approximation of the original surface.

Well, the developers over at Chaos Theory — the guys who brought us V-Ray — have been working for the past couple of years on a scanner that records not only diffuse color data, but reflectance and glossiness as well. The information is saved into a BTF (Bidirectional Texture Function), which can be used within V-Ray 3.3 as a VRscan material — different from the more traditional BRDF functions that other shader systems use (including V-Ray’s regular shader). Since all these components work together to generate what we perceive as “leather” or “satin” or whatnot, the scan brings you close to photoreal, and you can begin tweaking from there.

The whole idea is similar to Quixel’s Megascans. But the difference is that Megascans feed into the map channels of standard renderer shaders — which you still need to dial in once applied. The VRscans shader incorporates the values into the shader itself, which can then be used as a baseline reference for typical shader development, or if you want to incorporate it into something like a game engine.
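The practical difference can be pictured in a few lines of toy code: an analytic BRDF is a parametric formula you tune by hand, while a scanned material is, at heart, a lookup into measured data. This is only a conceptual illustration — the function names and the tiny one-dimensional table below are invented for the example, and real BTF data is far denser, varying with position and light direction as well.

```python
import bisect

# Toy analytic "shader": a Blinn-Phong-style glossy lobe, driven by a
# hand-picked parameter an artist must tweak to approximate a real surface.
def analytic_gloss(cos_half_angle, shininess=32.0):
    return max(cos_half_angle, 0.0) ** shininess

# Toy "measured" material: reflectance sampled at a few half-angle cosines,
# the way a scanner might record it. There is nothing to dial in; the
# surface response is already baked into the data.
measured_cos = [0.0, 0.5, 0.8, 0.95, 1.0]
measured_val = [0.01, 0.03, 0.12, 0.55, 1.0]

def measured_gloss(cos_half_angle):
    # Linear interpolation into the measured table.
    c = min(max(cos_half_angle, 0.0), 1.0)
    i = bisect.bisect_right(measured_cos, c) - 1
    if i >= len(measured_cos) - 1:
        return measured_val[-1]
    t = (c - measured_cos[i]) / (measured_cos[i + 1] - measured_cos[i])
    return measured_val[i] + t * (measured_val[i + 1] - measured_val[i])
```

With the analytic version, matching a real surface means guessing `shininess` until it looks right; with the measured version, you start from the captured response and tweak only if the shot demands it.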

The approach is great when you have to match surfaces to ones captured photographically. But it’s also amazing for industry outside of entertainment (as if) — like fabrication, where you are trying to prototype products before you make the investment in actually purchasing the raw materials to build them. Real-world scans will allow you to visualize that stuff with confidence before making costly decisions.

Despite the development time, the tech was just released and is starting to get traction, both as a potential subscription service with access to a growing library, and as a dedicated scanning service where clients can send in project-specific materials to be scanned. The process is limited to opaque hard surfaces — so no skin or glass, or anything like that. But this is a pretty amazing start.

Glyph Software Mattepainting Toolkit

One component of visual effects that doesn’t really get much love, technically speaking, is matte paintings. The technique itself is one of the oldest in the book, starting with set painting from Georges Méliès around 1900. Willis O’Brien used them in King Kong 80-some years ago. Albert Whitlock was frequently hired by Hitchcock. But back then, the artists would paint on glass, and it would be photographed with either a piece painted black in front of it to generate a matte, or the paint would be scraped away and the live action would be shot through the matte painting, capturing it all in one pass.

Then along came digital painting. And after that, we could project paintings onto geometry. And then everyone was all like, “Send it to DMP — they’ll fix it” (DMP = Digital Matte Painting). So, with the high demand for such things, it became necessary to have tools to manage it all. Traditionally (in digital terms), you have a matte painting that is supposed to be viewed from one camera angle, projected onto geometry like a projector — a building in a city that has a bunch of damage, for example. If you move to the side and reveal the other wall, then the painting doesn’t work anymore, and you have to make another painting from the new angle. But that painting doesn’t work from the first position, so you need to blend the two with a mask. Now imagine that there are fifty buildings. This is where Glyph comes in.

Glyph Software’s Mattepainting Toolkit (gs_mptk) is a simple but powerful tool that creates layered shaders, lets you manage the textures (a.k.a. paintings) for each layer (up to 16), each tied to a projection camera, and then controls the geometry that the shader is attached to. It uses Viewport 2.0 in Maya to display the paintings in the context of the shot. And on top of that, it has a toolset that makes managing everything easier.
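Stripped of the Maya and Viewport 2.0 machinery, a projection camera does something quite simple: it maps each point on the geometry back into the image plane of the camera the painting was made for, and samples the painting there. A minimal pinhole-camera sketch of that mapping — the function and parameter names are invented for illustration and are not part of gs_mptk:

```python
def project_to_uv(point, cam_pos, cam_rows, focal=1.0):
    """Project a world-space point through a pinhole 'projection camera'
    into texture UV space (0.5, 0.5 is the image center).
    cam_rows is a 3x3 world-to-camera rotation, one row per tuple;
    the camera looks down its local +Z axis."""
    rel = [p - c for p, c in zip(point, cam_pos)]       # into camera frame
    x = sum(r * v for r, v in zip(cam_rows[0], rel))
    y = sum(r * v for r, v in zip(cam_rows[1], rel))
    z = sum(r * v for r, v in zip(cam_rows[2], rel))
    if z <= 0.0:
        return None  # behind the camera: this layer has no paint here
    return (0.5 + focal * x / z, 0.5 + focal * y / z)   # perspective divide
```

Each of the toolkit’s (up to 16) layers repeats this lookup with its own camera; the layered shader then decides, per point, which projection wins.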

For instance, you can generate occlusion and coverage maps. Coverage maps show which parts of the objects in the scene the shot camera sees from the beginning to the end of the shot, revealing to the matte painter where the painting ends and thereby avoiding unnecessary work.

Then there are mattes in many different flavors, which are used to blend the different projections. Shadow Occlusions effectively turn the projection camera into a light: whatever geometry is not “illuminated” will reveal the next projected painting down in the layered shader — a different projection from a different camera. Facing Ratio does a similar thing, but fades the mask as the faces of the geometry turn away from the camera. And finally, you can go old school and explicitly paint the areas you want to blend using Maya’s internal paint tools. Once you are done, you can bake the textures down to the original UV maps on the objects.
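The Facing Ratio matte in particular is easy to picture as math: the mask is just the cosine between a surface normal and the direction toward the projection camera, clamped so back-facing geometry goes fully transparent, and raised to a power to sharpen the falloff. A small sketch of the idea — the names are invented for illustration, and this is not Glyph’s code:

```python
def facing_ratio(normal, view_dir, falloff=2.0):
    """Blend weight for one projection layer at one surface point.
    normal and view_dir are unit 3-vectors; view_dir points from the
    surface toward the projection camera. Returns 1.0 for geometry
    facing the camera head-on, fading to 0.0 as it turns away."""
    d = sum(n * v for n, v in zip(normal, view_dir))  # cosine of the angle
    d = max(0.0, min(1.0, d))                         # back-facing -> 0
    return d ** falloff                               # sharpen the falloff
```

A higher `falloff` hands the surface over to the next projection sooner as it turns edge-on, which is exactly the behavior you want when blending two paintings of the same building from different angles.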

This is the core functionality of Glyph’s Toolkit ... but it doesn’t stop there. You can also import point clouds generated from photogrammetry software like PhotoScan and Photosynth.

For matte painters, this tool is a must. If I were to quibble, I would love the texture baking to utilize UDIM UV space — for feature-film FX, the traditional 0-1 UV space just doesn’t cut it anymore. But maybe we’ll see that in future versions.

lously depict how Kyuta forms his identity. That’s why I made him a character with emotional turmoil in his heart.”

As Kyuta and Kumatetsu spar and train, they often trade roles as student and teacher. Over the course of eight years, Kyuta grows strong and adept; Kumatetsu becomes more disciplined and thoughtful.

Student Becomes Teacher

“I think that parents and teachers have historically taken a ‘top-down’ approach to raising children, but these days I suspect it’s become more mutual growth,” Hosoda continues. “Parents and teachers today can be considered imperfect; they and their children need to mature together. I used the relationship between Kumatetsu and Kyuta to express my wish for children to encounter different people whom they can call their ‘teachers of choice,’ people who help them mature into adults. Simultaneously, I wanted to show adults how wonderful it is we don’t have to just look back on those bygone days when we were ‘growing up’ — we can keep on growing. It may be im-

tects from bullying classmates. In return, she helps him read Moby-Dick. As Kumatetsu squares off against Iozen (Sean Hennigan), his rival for the throne of the Juntengai, Kyuta confronts Iozen’s son Ichirohiko (Austin Tindle) — and the darker side of his own nature. Their climactic battle evokes Moby-Dick in a spectacular combination of drawn animation and CG. Ichirohiko takes the form of the great white whale, moving like a shadow through neon-drenched Shibuya and against the starry night sky.

Hosoda, who read Melville as an adolescent, explains: “Kaede says, ‘Ichirohiko is fighting the very darkness — the “beast” — within himself.’ I cited Moby-Dick in the film to show that it’s humans who are beastlike, and beasts who are humane. The whale is a symbol of human desire, so it’s highly symbolic for a whale to swim through Shibuya, a human city steeped in desire. The mixture of ugliness and beauty is a key here, so the whale is depicted in a dreamlike, beautiful way.”

Sticking with 2D

As the battle with the whale proves yet again, few directors can match Hosoda’s ability to blend media in striking, imaginative ways. He dismisses the idea of making a CG feature. “Animation is drawings,” he declares. “I don’t think of animation as an extension of live action; it’s an extension of the arc of art history. I want to demonstrate the possibilities of animation by using pioneering visual expressions, by depicting familiar motifs that anyone can identify with in a fictitious world completely different from our own.”

He concludes, “People often ask me, ‘Why don’t you make (purely) CG films?’ But in the art world, nobody says oil paints are old and the digital art on your tablet is the new thing. I don’t think the techniques you use are important. What’s important is your art itself: that’s what moves people emotionally.”
