How to Create More Time, Part 1 of 2

Animation Magazine - By Martin Grebing

Contrary to popular belief, time is not money. Time is more precious, by far. With all the money in the world, you can’t buy more time. But with a little bit of time, you can make an indeterminate amount of money. Hence, time is exponentially more precious than money.

Other than eating healthier, exercising more and living a less-toxic lifestyle, the assumption is there’s really not much you can do to extend your stay on this planet. However, with some careful measuring and reallocating of resources, it is possible to literally create more usable time while you’re here.

Become the Chronoscope

You must first measure where you are before you can formulate a plan on how to get where you want to go. This being the case, over the next two weeks, make a log of every single thing you do and the amount of time it takes to do it:

When do you wake up? Do you hit the snooze? What do you do for breakfast? Do you brew your coffee or grab it on the go? What do you do at home, at work, at play? How much TV do you watch? Do you drive to work? How long does it take?

Meticulously document every minute of every day. The more detailed you make your measuring, the more time you will be able to capture. This may sound like a lot of work, but if you take it seriously, tracking everything you do for the next two weeks could be the starting point for adding an untold number of years of usable time to your life.
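
A log like this can even be tallied with a short script. Here is a minimal sketch (the activity names and minute values are hypothetical, just for illustration) that totals minutes per activity over the logged days, so the repeated habits stand out:

```python
from collections import Counter

# Hypothetical two-week log entries: (day, activity, minutes spent)
log = [
    ("Mon", "snooze", 15),
    ("Mon", "coffee run", 20),
    ("Mon", "commute", 45),
    ("Tue", "snooze", 15),
    ("Tue", "coffee run", 20),
    ("Tue", "TV", 90),
]

# Total minutes per activity -- recurring entries reveal the
# habits and rituals worth automating or outsourcing later.
totals = Counter()
for _day, activity, minutes in log:
    totals[activity] += minutes

# Print biggest time sinks first.
for activity, minutes in totals.most_common():
    print(f"{activity}: {minutes} min")
```

Sorting by total puts the largest time sinks at the top, which is exactly where the next two steps (automation and outsourcing) should start.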

Automation

Now that you’ve documented a typical two-week slice of your life, search for patterns. Review all your documentation and identify things you do that are repetitious; e.g., habits or rituals. For example, do you wake up every morning and then brew coffee or tea? How long does this take? Do you pay your bills online or in person? How often do you go grocery shopping?

Once you identify patterns, the goal is to find ways to automate these tasks: Use the auto-brew feature on a coffee maker so it’s ready when you wake up. Whenever possible, use auto-pay for your bills. As convenient as paying bills online may seem, doing this on a recurring basis wastes precious moments each and every time you do it. There are a number of grocery and produce delivery services that happily bring fresh food right to your doorstep on a weekly, biweekly or monthly basis, thereby reducing the number of trips you need to make to the store.

Find as many patterns as you can and figure out a way to automate each one so you no longer need to spend waking hours performing these tasks. Once identified and implemented, keep track of these time savings and add them to your running total.

Outsourcing

Find things you do on a regular basis that you could outsource to someone else. There is a simple rule in business that applies doubly to your life’s timeline: never do something yourself when you can hire someone else to do it, especially if it makes a profit. For example, if you spend 10 hours per week performing a task that can be delegated to someone else, how many more higher-performing tasks can you fill that time with that will generate more benefit for you than the cost of paying someone to do it? If it costs $500 per week to pay someone to free up 10 hours per week of your time, and you use that time to generate $600 above and beyond what you were doing before, then it’s a win-win for everyone involved.
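
The arithmetic in that example can be made explicit in a few lines. This sketch just restates the article’s numbers ($500 per week to free 10 hours that generate $600 of new value); the variable names are mine:

```python
# Figures from the example above: pay $500/week to free 10 hours,
# then use those hours to generate $600 of additional value.
hours_freed = 10
cost_per_week = 500      # paid to the person you delegate to
value_generated = 600    # earned with the reclaimed hours

# Delegation pays off whenever value_generated exceeds cost_per_week.
net_gain = value_generated - cost_per_week
print(f"Net weekly gain: ${net_gain} for {hours_freed} reclaimed hours")
```

The break-even test is the whole decision rule: if the reclaimed hours cannot generate more than the delegation cost, the outsourcing only buys free time, not profit (which may still be worth it).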

This also serves as a perfect example of how to use leverage to create more results while benefitting more people from the exact same chunk of your time. If you mow your own lawn, maybe consider hiring someone to do this for you. If you typically clean your house once per week, there are a number of very reasonable house cleaning services that would gladly take this task off your hands, freeing time for you to do other things. Really stretch your comfort zone on this concept and see how many tasks you can outsource, casting obligation and guilt aside.

To be continued ... next issue!

Martin Grebing is an award-winning animation director and producer who has focused his career on smaller studios and alternative markets. Today, he provides private consulting and is the president of Funnybone Animation, a boutique studio that produces animation for a wide range of clients and industries. He can be reached via www.funnyboneanimation.com.

Suicide Squad from Warner Bros. and DC is like an outside-the-box version of Marvel’s Guardians of the Galaxy: a group of “disposable assets” recruited for high-risk missions and consisting of the Joker’s girlfriend Harley Quinn (Margot Robbie); elite hit man Deadshot (Will Smith); pyrokinetic El Diablo (Jay Hernandez); thief Captain Boomerang (Jai Courtney); Killer Croc (Adewale Akinnuoye-Agbaje); and mercenary Slipknot (Adam Beach).

Director David Ayer approached military vehicles, weapons and gunfire with the same kind of authenticity he brought to his breakout feature Fury. And that was his VFX mandate for handling the crucial battle scenes.

“We did a lot of work early on to set the rules for those types of things,” says Robert Winter, MPC VFX supervisor in Montreal. “How do tracers photograph in a film camera at 24 frames per second with a wide-open shutter? This allowed us to get our photographic look and motion blur.”

MPC worked on three big gun battles (700 shots out of 1,100), and Sony Pictures Imageworks (in Vancouver and Culver City) handled the climactic fight with the Enchantress (Cara Delevingne), along with the creation of her brother, Incubus (Alain Chanoine), and his battle with a mega version of Diablo.

The first battle occurs when the squad enters Midway City and encounters a group of demon soldiers in a city alley. “That was an interesting sequence from the standpoint that they shot a lot of stunt guys as these demon soldiers in costume, and then we either augmented in CG or added more CG demon soldiers to fill out the numbers,” says Winter.

They were possessed military guys: they ran like humans but were covered in black. “The big creative direction was when they get struck by a bullet, they would shatter into an obsidian-looking rock, so we had these deforming characters that would get hit by a bullet and an arm would break off, and repetitive damage where he’d get hit again and his other arm or head would get knocked off. It was very challenging to take deforming, organic characters and how they perform when intact, destroying them systematically,” says Winter.

Lock and Load

The squad used a host of weapons, including Quinn’s bat, bullets, a sword and a boomerang. So MPC created a variety of destruction VFX in Houdini and Kali, the in-house rigid-body simulator. Winter likened the debris to The Matrix, but in real time and without the overcrank.

In another battle with the demon soldiers, Diablo, who has forsworn his power of fire creation because of a personal tragedy, lets loose for the first time. Up until then we saw only hints: a dancing flame girl in his hand or the formation of letters in the air. The fire was created with Flowline and rendered in RenderMan.

Another skirmish ensues on a rooftop with a burning skyscraper in the background. There are CG shots of helicopters flying through a canyon of buildings, and the Joker (Jared Leto) arrives to rescue Quinn armed with a mini-gun. There are nearly 100 shots in the sequence, consisting of rotor wash, simulation of the smoke coming out of the fires, embers in the atmosphere and dynamic fire that’s always changing.

Also, for a flashback, MPC did extensions of the interior of the chemical plant set where the Joker and Harley romantically jump into a vat of acid. They populated the mechanisms and the vats in CG and added the liquid and swirling colors of purple, red and blue from their melted clothes.

Taking Possession

Finally, MPC did the initial Enchantress possession. For this, Ayer didn’t want it to look magical, so they achieved a transformation containing a black aura that exudes negative energy.

With Steven Spielberg’s Roald Dahl live-action hybrid adaptation, The BFG, Weta Digital refined its Oscar-winning performance-capture work, building on what it has achieved with Avatar and the Planet of the Apes franchise.

“They needed to get into the character right away, so they started building 3D models, using some of Mark Rylance (the actor who played the eponymous 25-foot giant) and some of the design ideas and refining those concepts and showing them to Spielberg,” says Joe Letteri, senior visual effects supervisor for Weta.

“Ultimately, we came up with a design that was a little bit more cartoony than Mark, and that’s what we built for the stage model that he was shooting with when we were doing all the (virtual) simul-cam,” Letteri says. “But, after cutting the movie and turning it over, after seeing Mark more and more in the role, we put a lot of Mark’s features in there, like refining his mouth and eyes. We worked out the last of it in the final four to six months. Mark played the old man grumpiness and we watched that evolve. We spent a long time studying his performance and getting the nuances in the animation.”

Unlike Tintin, which was all performance capture, there’s a live-action component and a scale difference to The BFG, and Spielberg needed to figure out how to shoot it. He was worried about shooting multiple times to get it all together, which is somewhat unavoidable with scaled characters, according to Letteri.

“We realized that what you really needed was to have Mark and Ruby (Barnhill, who plays the orphan girl, Sophie) act together as much as possible,” Letteri says.

Since Apes, though, they knew they could capture with a fully lighted environment. And with Rylance being such a theatrical performer, they played to his strengths by building a set with minimal props and lighting, designed by multiple Oscar winner Rick Carter. They also put together a previz demo called “Moody MoCap” for this stage set that enabled Spielberg to conceptualize the shoot.

Spielberg liked being part of the world, so they broke down the previz to discern what they could shoot in the virtual volume and on the Moody MoCap stage. “For example, take the cottage scene,” Letteri says. “We did that first as a master. We had a set built to Mark’s proportion as a giant. He could walk in, do everything he wanted, and Steven was there with him. Any time there was a dialogue moment between Mark and Ruby, we’d have a place for her so she could go on her knees and be at the right eye line for him.”

After working out the master, they went to the human-size live-action stage with bluescreens and a giant-size table for Barnhill, and Rylance was up on a platform for the accurate eye line. This allowed them to shoot her while recapturing Rylance.

“We did two things: Traditional simul-cam, where you saw the virtual set, and simul-cap, where they captured Mark simultaneously live as he performed with Ruby either on the Moody MoCap set or the live set. Steven could then pick and choose among the two performances with the girl,” Letteri says.

Then there was a third set for the attack of the giants (twice the size of the BFG), scaled down and built with a wire-frame mesh. The giant combatants (who were more caricatured) hunched down, and when all of the characters were in the same shot, Rylance was on his knees and Barnhill was on her belly to get the eye line right. But everyone could play it live, and they pieced it all together.

Dreams in a Jar

The BFG is also a dream catcher, and there was a lot of creative animation devoted to the scenarios inside the jars where he stores them. “The dreams were a combination of simulation and animation because we kept trying to come up with a vocabulary for them,” Letteri says. “Steven wanted them to tell a story but not be too literal so they didn’t look like aliens trapped in a jar.”

But they also couldn’t be too much like a lava lamp, because that’s not interesting.

“For every dream, we came up with a story, and then our simulation team gave us a nice look that was a cross between a nebula in a galaxy but with a flowery effect,” he says. “We kept everything in motion and could dial in how literally you saw the dream at a given point in time. Every shot was choreographed to camera, revealing snapshots of a dream. Therefore, it was never generic movement even if you couldn’t always discern a meaning of the narrative. A lot of particle generation was done with Houdini, which works well with integrated animation pieces.”

But they got stuck on Sophie’s dream. “They kept coming up with literal ideas, such as she’s a dancer, but when you render these ideas in concrete form, they seemed mundane,” Letteri says. “Jamie Beard (the animation supervisor) came up with her dream revealing itself to her as a drawing that materializes in the style of the graphic line illustration (by Quentin Blake). We took the idea to Steven and he liked how it took it away from the literal.”

Bill Desowitz is crafts editor of Indiewire (www.indiewire.com) and the author of James Bond Unmasked (www.jamesbondunmasked.com).
