Tech Reviews

Animation Magazine

Autodesk used to release its M&E products at the same time, giving you a burst of new features in Maya, Max, MotionBuilder, etc. Now, however, it appears that with the migration to a subscription-based model, Autodesk is pushing out releases and updates throughout the year without all the fanfare that existed in the past. For instance, Maya 2018 had its debut at this year's SIGGRAPH, and within two months, 2018.1 is already out and available for subscribers. It is a tiny, almost utilitarian point release in comparison to 3ds Max 2018 (which we reviewed last month), but it is indicative of a development approach built on the idea that some features or fixes don't need to sit around for a year waiting for other, farther-reaching features to be ready. You just quietly send out new stuff to the subscriber base.

So what about this subscriber model, anyway? I've heard pros and cons for the shift ever since Adobe really went big with it, and I can see both sides. You have those who would prefer to buy a car and own it rather than renew a lease every two years and continually be driving the latest model. But from a developer standpoint, it seems like everyone is moving toward the subscription model — and enough users don't mind that it isn't going to change anytime soon. Autodesk is sweetening the deal beyond the consistent updates with Cloud Rights, which gives you the ability to launch UI-less licenses of Max or Maya in the Cloud in order to expand your computing power. The license (or licenses, if you have a multi-user subscription) covers renders, simulations or caching.

But enough with the boring stuff. What's new in Maya 2018? Well, it seems like everyone is excited about a new UV workflow with tools that make laying out UVs faster and more efficient. And if you've read my past articles, you know how much I'm not a fan of UVs. Anything to speed along the process is a godsend.

The love is spread out through the process. XGen keeps getting more robust with clumping in interactive grooming. Animation and rigging have some UI updates to make the process less cluttered, with rig controllers that turn on and off based on cursor proximity. And motion graphics artists are still gaining stride with tools like advanced text tools, direct dynamics through MASH and, most significantly in my book, a Live Link to After Effects — which until now has really been the domain of Cinema 4D.

Of course, I won't be able to fit everything in this space. And now with updates coming more frequently, I'm not going to be able to stay on top of everything. But I promise to do my best! Website: Price: $1,470 for annual subscription; $185 monthly

those characters. If you are using headgear, then the performer can move around — because physical motion does affect the performance — without the camera losing the facial features. We now need to capture the body motion!

This is where the Xsens MVN suit comes in. For my demonstration, we had the latest Awinda suit, which consists of a shirt and a bunch of straps that you loop around your limbs at key points. Each strap holds a matchbook-size wireless tracker packed with inertial and magnetic sensors that measure acceleration, rotation and magnetic fields to assess where it is in space. All that data is fed back to a receiver which interfaces with iClone! The higher-end MVN Link suit is a complete Lycra outfit with the trackers wired together, which can sample at a higher rate for more fidelity. But we aren't assessing pro athlete dynamics or planning for the best prosthetic to replace a leg — I'm just running and jumping around my house like a fool, so the Awinda is perfectly fine. No matter where I ran around, back at my HP Mobile Workstation, we were recording both my antics and my face, and applying it to the 3D character on the desktop.
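For the curious: those trackers work by classic inertial sensor fusion. A gyroscope reads smoothly but drifts over time; an accelerometer senses the pull of gravity with no drift but lots of noise; blending the two gives a stable orientation. Xsens's actual filter is proprietary and far more sophisticated, but a toy one-axis complementary filter in Python shows the flavor (all names here are illustrative, not anyone's real API):

```python
import math

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """Fuse gyro rate and accelerometer tilt into one pitch angle.

    samples: list of (gyro_rate_dps, accel_x_g, accel_z_g) tuples.
    The gyro integrates smoothly but drifts; the accelerometer is
    noisy but drift-free. Blending the two gives a stable estimate.
    """
    pitch = 0.0
    for gyro_rate, ax, az in samples:
        # Tilt implied by the gravity direction alone (no drift).
        accel_pitch = math.degrees(math.atan2(ax, az))
        # Integrated gyro reading (smooth, but drifts over time).
        gyro_pitch = pitch + gyro_rate * dt
        # Weighted blend: trust the gyro short-term, accel long-term.
        pitch = alpha * gyro_pitch + (1.0 - alpha) * accel_pitch
    return pitch
```

Feed it a few seconds of a sensor held at a steady tilt and the estimate settles on the true angle even though it started at zero. Multiply that by 17 sensors, three axes and a skeletal model, and you get a full-body performance.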

That recorded data can now be post-processed after the performance — either through sliders, which tweak the performance on the fly as you watch it play, or, if you want to dig in deep, you can edit at a keyframe level. We can now refine the data within iClone itself and get some cool stuff going — especially since the latest iClone is getting into the Physically Based Rendering game, and integrating real camera data. But what if we have a broader pipeline that uses Maya, or Max, or we are using Unreal or Unity? Reallusion has 3DXchange for both importing assets and data, and exporting them out to a plethora of other 3D packages.

The overall setup time for the whole system was maybe a half hour, which included making sure that all the software was set up on my Mobile Workstation with the latest and greatest builds; getting the right-sized headgear for my big head; calibrating the cameras; and then getting the suit situated. (I totally recommend getting a friend to help out with this. Putting the suit on alone is a bit frustrating. Maybe not as frustrating as, say, tying a bow tie, but it's way easier with someone to help out.) The headgear fits in a small Pelican case, including the cables, batteries, adapters and Teradek video transmitter. The Xsens fits in a backpack, and iClone fits inside your laptop. You are quite literally a walking motion-capture studio.

My mind is swirling with applications, from previz and pre-production on films, to planning shoots in VR (blocking out actors in a space before even getting to the space), as well as even non-film applications (God forbid), like dance analysis, or martial arts schools — any place where feedback on physical performance is necessary to bring people to the next level. And the portability of the whole system makes it convenient and accessible.

All of these tools fitting together gets me excited about how this can benefit independent artists who may not have access to a large motion-capture volume, or animation teams that just want to quickly block out beats in a performance and then add on top of it. We are living in the future, people!

[This demonstration was accomplished using an HP ZBook 15 G3 running Windows 7.] Todd Sheridan Perry is a visual effects supervisor and digital artist who has worked on features such as The Lord of the Rings: The Two Towers, Speed Racer and Avengers: Age of Ultron. You can reach him at

says there are too many factors that are beyond our control. “What I want for VR is a different story,” he notes. “I want it to be part of our daily lives. I want it to be used as a tool to enhance our compassion. I believe VR can help us create a world to our measure: more compassionate, more integral, more profound … and I'd love to see us take full advantage of all that it has to offer.”

The director also points out that camera movement is going to play a huge role in the future of VR. “That passive interactivity is something we should start to work on more deeply,” he says. “There is huge potential in the fact that we can have narratives that completely immerse you in the story even if they are animated. I also learned a lot about the importance of sound and music, and the tremendous ability of VR to generate engagement with the characters.”

Melita, which is available exclusively for the Oculus Rift VR headset, has already received praise from both the tech and animation communities. Alcalá says he has been quite touched by the heartwarming response the short has generated. “It leads me to believe that we are reaching people on an emotional level, which is what you strive for as a storyteller,” he says. “The fact that people are having discussions after seeing our short, about climate change, about our future as a species on this Earth, as well as our relationship to AI … that is profoundly rewarding.” For more information, visit

In the early 16th century, a warrior-monk vanquished a powerful demon by pinning it to a rock with the Beast Spear, an enchanted weapon forged in China centuries earlier. Five hundred years later, high school student Ushio Aotsuki (David Matranga), a descendant of the monk, discovers the demon is still transfixed to the cellar wall of his family's Shinto shrine. Although he looks like a cross between a man and a lion, the demon's ochre color and stripes earned him the name Tora (“tiger,” voiced by Brett Weaver).

After Tora persuades Ushio to free him, the teenager faces two grave problems: a monster intent on devouring him and a powerful magical weapon he doesn't know how to wield. Fortunately, the Beast Spear has a mind of its own, and Ushio is able to keep his foe in check. But freeing Tora has attracted hordes of minor demons and yokai (“monsters”). They begin attacking people, including Ushio's two friends: dark-haired, straightforward Asako Nakamura (Allison Sumrall) and gentle Mayuko Inoue (Luci Christian). To protect his friends and defeat the weird-looking yokai, Ushio must work with Tora, despite their mutual misgivings.

Tora has to cope with the changes 500 years have brought to life in Japan. He learns to dodge cars and buses, and discovers that while they're not as tasty as humans, he likes hamburgers. Although Tora insists he'll devour Ushio, and
