Animation Magazine

The New Gold Rush Is Now

By Martin Grebing

With the old barriers to creator ownership and distribution gone, there’s no reason not to create your own stuff — and find an audience for it.

Once upon a time, it was all but impossible for independent animators and filmmakers to have their work seen by the masses. And the concept of an independent producer actually making a living, much less a hefty profit, off their creative content was little more than a pipe dream.

First, you had to beg, borrow and steal in hopes of raising enough money, sweat equity and volunteerism to see your project through to completion, only to then have to figure out how to get your work in front of a competent agent, producer or distributor, with the distant hope that they might consider your project for assimilation into their pipelines, control and ownership. More often than not, your project would become the property of said media giant and you would be left to your own devices. The financial rewards for your blood, sweat and tears lay almost entirely in the hands of others. Sadly, upon executing the contract, most independent producers would be cast aside, holding an empty bag with nary a penny in sight. Or, sardonically enough, even owing large amounts of money to the distributor.

Much to the chagrin of old-school media and entertainment conglomerates, and to the long overdue boon of the independent artist, everything has changed.

With the current boom of Internet and streaming-based content, the major corporations have lost some of their control over broadcasting and monetizing your media. At one point, the entertainment industry was an exclusive club, entirely controlled by executive elitists and their unbreachable gatekeepers. While this paradigm still remains in certain ways, there are exponentially more venues for you to broadcast your work and — gasp! — even make money from your efforts. You now have the power to become your own producer, broadcaster, distributor, marketer and merchandiser all in one.

A quick internet search of “best video sharing sites” will give you dozens upon dozens of websites where you can showcase and promote your work, i.e., distribution channels. Once you have eyes and ears on your work, the rest is up to you. And with the ease and convenience of earning money and selling things online, the sky’s the limit. Monetizing can be as simple as clicking the “Monetize this video” link on the video hosting service of your choice. You can even link back to your own website or online stores where you sell T-shirts, DVDs, posters, apps and a host of other swag.

Countless independent producers, even children, are making millions upon millions of dollars via free, online video sharing sites by creating and broadcasting their own content. The formula is simple: produce content, post it online for free, promote your content for free, and then Count de Monet.

But don’t let technology’s ease of broadcasting your work across dozens of video sharing platforms dull your grit or lull you into a false sense of security. Just because you can share your work with millions of people doesn’t mean millions of people will watch. No matter how much technology is available or how marketing-savvy you become (from reading Autonomous Animator articles, no doubt), it all boils down to one foundational requirement for making money from your creative content: You have to produce something that people want to watch. It doesn’t matter how much money you spend on production, how deep, meaningful and introspective your arthouse project is to you and your best friend, or even how many years you labored to see your passion project come to life. When it comes to making money from your work, the only thing that matters is the number of eyes and ears that want to see and hear your stuff.

At the risk of offending certain artistic sensibilities, this cold, hard fact has always existed and quite possibly always will: money is the lifeblood of business. For example, if it cost you an arm and a leg to produce your first feature film, you’re going to need a hefty return to pay back your investors and recover your limbs, much less produce a sequel. If your first effort failed to perform financially, what rational investor would consider funding your future endeavors? If the concept of acquiring massive amounts of money doesn’t sit right with you, feel free to donate anything above and beyond your basic cost of living to your favorite charities. If nothing else, look at money as a means to keep producing your passion projects while maintaining your desired quality of life.

The new gold rush is here. And like all great rushes of yore, it’s only a matter of time before it runs its course. So act now or forever be left in the dust — or at least until the next one comes around.

Martin Grebing is a multiple-award-winning animation producer, small-business consultant and president of Funnybone Animation. Reach him at www.funnyboneanimation.com.

Director Jon Favreau and his crew go for emotion and humor with extensive VFX and mo-cap in Disney’s hybrid update of The Jungle Book.

By Bill Desowitz

Like J.J. Abrams with Star Wars: The Force Awakens, director Jon Favreau approached his photorealistic remake of Disney’s 1967 animated classic The Jungle Book from both a child’s and an adult’s perspective.

“You’re trying to honor the emotional memory, the perceived memory of people who grew up with this stuff,” he says. “But you’re also trying to make a movie that appeals to the full audience. That’s really what (Walt) Disney set out to do. I stuck with the ’67 story structure but focused on images that I remembered before watching it again.”

That’s a trick Favreau learned as director on Iron Man: It’s not necessarily what’s in the material that’s most important — it’s what you remember. And so he keyed off of the collective memory of those iconic images.

Usually, this high level of tech and artistry is reserved for big action spectacles, but Favreau emphasized that The Jungle Book, originally adapted from the book by Rudyard Kipling, was “a unique opportunity to use it for humor and emotion and showing nature and showing animals. And getting into that real deep, mythic imagery that, I think, always marries well with technology.”

Indeed, it’s the most tech-savvy project the director has ever embraced. Taking his photoreal cue from the Oscar-winning Gravity, where you had a tough time determining what was live action and what was animation, Favreau went for a combination of mocap and CG animation, with newcomer Neel Sethi as the only live actor, playing Mowgli.

He’s raised by Indian wolves Raksha (Lupita Nyong’o) and Akela (Giancarlo Esposito). When the fearsome scarred Bengal tiger Shere Khan (Idris Elba) threatens to kill Mowgli, he leaves his jungle home, guided by Bagheera, the friendly black panther (Ben Kingsley), and Baloo, a free-spirited bear (Bill Murray).

Along the way, Mowgli encounters the hypnotic python, Kaa (Scarlett Johansson), and the smooth-talking Gigantopithecus, King Louie (Christopher Walken).

“The two biggest challenges were how to seamlessly integrate the live-action boy and believably get the animals to talk,” Favreau says. “We looked at animal behavior online for reference and would sometimes exaggerate the environment or scale for effect. Dogs or wolves are very expressive with eyebrows but not with their mouths; cats don’t use their eyebrows; bears use their lips and eyebrows. Each animal provided a different set of tools to use.”

Assembling the Crew Favreau turned to Oscar-winning VFX supervisor Rob Legato (Hugo, Titanic) to spearhead the movie in collaboration with MPC, which did the majority of CG characters and environments; and Weta Digital, which handled King Louie and the other primates — not surprising, given its King Kong and Planet of the Apes pedigree.

Legato was thrilled to use the best that virtual production has to offer with some new tech wrinkles to work more quickly, efficiently and believably, as though they were shooting a live-action movie.

“What we were trying to do is remind you that everything is real and to get lost in the performances and story,” Legato says. “The artistic choices that you make in a live environment are based on the instincts and experiences and filmmaking skills that you’ve honed.”

Thus, you have to give very specific instructions to the animators about camera placement so that it all fits cohesively and organically.

“What I’ve been pushing for since The Aviator are tools that allow me to behave the way I want on the set, because I’m used to doing analog work, live-action work. I’m not sure what the angle of the shot is until I see it. And you try things out until it sings and then you know that’s the shot. It takes three or four takes to do that, but animation is very precise.”

A Virtual Stage The Jungle Book was shot by cinematographer Bill Pope on two stages. Supervising art director Andrew Jones could wheel a set onto one stage for shooting while prepping another set on the other stage.

“We had a motion-capture volume, we had actors playing the parts, we had suits, we had sets that were lined up with what the digital set looked like. And then we captured it,” Favreau says. “First, we had an animatic version, as you would on an animated film, then a motion-capture version that we edited, and then, finally, we took that and shot the kid as though he were an element.”

“Jon talked about how our jungle was the stage for primal mythology,” says MPC VFX supervisor Adam Valdez. “He also saw the opportunity to give audiences the wish fulfillment of living with animals, and for that the world and characters needed to pass the test of unblinking believability. We had to create an experience that was charming like the classic animated film, but intense when the story needed it.”

They made use of certain refinements since Avatar, mostly ease-of-use improvements, “where it becomes easier and faster to do, a little more real-time. But the workflow was to get a scene on its feet right away,” Legato said.

Rather than shooting all of the celebrities on stage together with the young actor, they shot their voice work separately and used puppeteers as stand-ins with the boy. This was a more traditional approach to accommodate Favreau’s comfort zone.

“We prevised and captured the movie at the same time because we were capturing the shot, what’s in the shot and then the camera coverage of it, and that got edited,” Legato says. “Now we had the analog freedom to just choose when we cut to the close-up, and we picked it like we normally do in live action. That became the blueprint that we were going to bring to the blue screen stage to recreate specifically that shot. And we knew with great authority that it would fit into the whole because we’d already seen it edited together in previs.”

Not Playing Around “The innovations were a thing called Photon, which makes the Motionbuilder game version of the scene a little closer to the way we wanted it, and the textures are a little more realistic,” Legato says. “It’s still game-engine quality, but it gives the artist a better clue of what it’s ultimately going to look like. And then we did some other innovations when we were doing the previs, and when we were shooting, and how to evolve motion control. We made this Favreauator thing, which is a device with which you could program subtle, secondary muscle movements, so when the kid sits on the bear, the animators created a saddle that moved and was actuated by the actual animation of what it was ultimately going to be. So that when you drop the kid into the scene on top of the bear, it’s much more realistic, because what’s driving him is the musculature of the animal underneath him.

“We shot on a 40-foot turntable for a walk and talk. The key light source was a projector, and there’s a technique of saying if the projector was also a camera, whatever’s in front of it at any one time is going to shadow the person as if he’s walking past trees and various (objects in the jungle). And the turntable moved our computer program, which tells the projector to print in the pattern of the light source. And when you put it all together and the kid’s walking up and down on this hilly thing, it looks like he’s on solid ground way beyond our stage floor and optimally lit by the sun.”
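The trick Legato describes is essentially shadow mapping run in reverse onto a physical stage: treat the projector as a camera, render the virtual jungle from its point of view, and anything the virtual occluders cover becomes shadow in the projected key light. Here is a toy depth-test sketch of that idea in Python — illustrative data only, not the production pipeline:

```python
import numpy as np

def projector_mask(occluder_depth, scene_depth):
    """Classic shadow-map test, used here to build the pattern the physical
    projector throws: texels where a virtual occluder (tree, leaf) sits in
    front of the actor's position get no light, i.e. they cast a shadow.
    Both inputs are depth images rendered from the projector's viewpoint."""
    eps = 1e-3                                   # bias against self-shadowing
    lit = occluder_depth >= scene_depth - eps    # nothing virtual in front
    return lit.astype(float)                     # 1.0 = light, 0.0 = shadow

# Toy example: a virtual tree trunk crossing the actor's plane.
actor_plane = np.full((4, 6), 5.0)               # actor 5 units from projector
occluders = np.full((4, 6), 9.0)                 # background: nothing in front
occluders[:, 2] = 2.0                            # a trunk 2 units away
print(projector_mask(occluders, actor_plane))    # column 2 goes dark
```

As the turntable rotates, the same test simply re-runs with updated occluder depths, which is why the boy appears to walk past trees under a moving sun.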

In conclusion, Legato offered: “It’s exciting for me because it bodes well for the future to create anything, and not just for movies that are larger than life about superheroes and destruction. After The Revenant, I think we will be good public relations for bears.”

Bill Desowitz is crafts editor of Indiewire (www.indiewire.com) and the author of James Bond Unmasked (www.jamesbondunmasked.com).

Deadpool was the perfect vehicle for Blur Studio’s Tim Miller to make his directorial feature debut. It’s the R-rated Marvel movie Disney would never make, and it embodies Miller’s maverick approach to filmmaking. (Naturally, Blur made several contributions, including the funky animated closing title sequence.)

Armored mutant Colossus proved to be a great foil for Ryan Reynolds’ snarky anti-hero, and required a complicated bit of animation by Digital Domain to pull off the 7-foot-tall organic-steel giant. Colossus was Frankensteined together with the help of voice actor Stefan Kapicic, motion capture performer Andrei Tricoteux for fighting, actor T.J. Storm for regular body motion, actor/stunt performer Glenn Ennis for initial facial shapes, and mocap supervisor Greg LaSalle for final facial performance.

“Tim wanted Colossus to be portrayed differently than in the X-Men movies. As a nerd, he wanted a return to the comic-book look: a bigger body builder type who’s Russian. But he also wanted photorealism,” says DD’s VFX supervisor Alex Wang, who collaborated with production VFX supervisor Jonathan Rothbart.

“For the body, we looked at Arnold Schwarzenegger during his body building days, but we wanted him to be much more athletic, so we also looked at football player builds: how long their muscles had to be in order for Colossus to realistically do the movements.

“For the face, we looked at very chiseled and pronounced facial features. But more and more, Tim wanted his face to be based on somebody. But it was hard finding an actor that he liked and, at the very last minute, we found that he liked the stuntman on set, Glenn Ennis, for his facial features.”

Mutant Expressions Miller was particularly keen on using the Mova facial-capture system that DD introduced in the Oscar-winning The Curious Case of Benjamin Button. Turns out that LaSalle, who now works for DD, was a recipient of Mova’s Academy Sci-Tech Award a couple of years ago. Miller turned to him to give the crucial face sync to audio after another actor fell through. LaSalle got to perform Colossus all alone with live-action plates as reference.

“Tim directed Greg and, using our direct drive system, we would then retarget the actor onto Colossus,” added Wang.

At the same time, DD pushed its muscle system to have greater control of the movement, because muscle and skin sliding tends to be all over the place. “And so we needed to find a way of using our skin simulation to art direct where those lines go,” says animation director Jan Philip Cramer. “Obviously, it’s metal and it can’t look like it’s stretching, but we had to find ways to compensate for natural skin slide that would look right.”

For the metallic finish, DD used cold-rolled steel as reference for the body and hot-rolled steel for his hair. However, the ridges and lines proved troublesome, so DD tweaked Houdini software to place them in targeted positions around his body (rendered procedurally in V-Ray).

A Decaying Hero Meanwhile, Rodeo FX, under the supervision of Wayne Brinton, completed close to 230 shots for Deadpool, which required fire and embers, grotesque skin alterations, and set extensions.

The mutation introduced into Reynolds’ body changes the structure of his skin and, once he becomes Deadpool, he’s hideous to look at without his tight red Spandex mask. Brinton and his team did concepts for skin decomposition at different stages, using time-lapse photography of rotting vegetables and meat for inspiration. They found that the production plates were too dark to show the subtleties of what they wanted to do, so they added more detail and shape to the skin, modeling with ZBrush, doing lighting passes, and finally compositing in the textures.

This scene was shot continuously in one room that had been fitted with gas pipes emitting flames, making the usual practice of submitting individual shots for approval inefficient and awkward. Instead, Rodeo FX asked to submit the finished sequence in its entirety to Rothbart.

The other main sequence that Rodeo FX worked on was a post-disaster scene in which a ship crashes, creating a junkyard of smoldering parts. The scene was shot against a green screen and then Rodeo FX generated set extensions for the junkyard, composited a matte painting that Blur Studio shared with them, and added smoke and ashes. Rodeo FX produced additional matte paintings based on photos of the set taken during production. The studio added lots of smoke and ashes at the beginning of the scene when everything is crumbling down, then reduced the intensity as the scene progressed.

“We aimed for a choreography of simulated ash, falling in 3D space,” says Martin Lipmann, compositing supervisor at Rodeo FX. “It’s seemingly minor elements like this that ensure the continuity and believability of a scene.”

Bill Desowitz is crafts editor of Indiewire (www.indiewire.com) and the author of James Bond Unmasked (www.jamesbondunmasked.com).

HP Z240 Workstation

store.hp.com

The HP Z series of workstations continues to bring substantial power through hardware, firmware and software updates — even at the entry-level workstations. While I’m a fan of the 800s because I am usually doing pretty robust tasks in visual effects, the 200s should not be ignored as a viable option — especially as an introductory machine, or for those artists who don’t need all that horsepower. Animators come to mind, as do tracking and roto artists.

My review system was the Z240 SFF (Small Form Factor) configuration, which is nearly half the size of its sibling workstation model, made to sit on your desk rather than under it, but it still packs a lot of punch.

The quad-core processor is the step up from Haswell to Skylake at 3.5 GHz, but that’s not really the primary source of the speed. That comes from the expanded NVMe PCIe SSD slots, which can take an HP Z Turbo Drive G2, providing extremely fast data access in comparison to typical SATA drives. This is critical for retrieving large data sets like particles in fluid sims, or simply long image sequences. And with a potential of 64GB of RAM in the four UDIMM slots, you can throw quite a bit at the machine without taking it down.

Graphics are driven by either NVidia or AMD. My machine sports an NVidia 1200 with 4GB of VRAM, which is pretty beefy. I do pretty beefy stuff. Lower-cost models would have a FirePro W2100 or an NVidia K420 or K640, which should provide more than enough pixel power for most artists. But, if you are using GPU-accelerated compositing or 3D stuff, I’d recommend going for broke.

With all this power, you’d think that the box would be jet-engine noisy, but because HP is always looking for a balance of power and energy conservation, there is an effort to reduce heat, which reduces the workload on the cooling fans, making for quieter machines. That, and the case design does a great job of keeping things pretty whispery.

For individuals, this is a great entry system, a powerful enough workstation to get most animation, art and visual effects tasks done — especially if you boost it up with some RAM and a Turbo Drive. But for studios, you could populate an entire roto or tracking department with a fleet of these machines at a fraction of the cost of the Z840s — which are great machines, but potential overkill.

Chaos Group VRscans

www.vrscans.com

The idea of creating photorealistic shaders from scratch is daunting … for any render engine. There may be repositories and libraries of pre-built shaders that you can start from, but those never really work out of the box, and could require hours of tweaking to get even an approximation of the original surface.

Well, the developers over at Chaos Group — the guys who brought us V-Ray — have been working for the past couple of years on a scanner that records not only diffuse color data, but reflectance and glossiness as well. The information is saved into a BTF (Bidirectional Texture Function), which can be used within V-Ray 3.3 as a VRscan material. This is different from the more traditional BRDF functions that other shader systems use (including V-Ray’s regular shader). Since all these components work together to generate what we perceive as “leather” or “satin” or whatnot, the scan brings you close to photoreal, and you can begin tweaking from there.
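To make the BTF-versus-BRDF distinction concrete, here is a minimal Python sketch contrasting the two sampling signatures. The function names and toy data are illustrative only — this is not the V-Ray or VRscans API. A BRDF is a 4D function of light and view directions alone; a BTF adds surface position, which is how a scan captures spatially varying effects like weave or grain that a single BRDF cannot.

```python
# Toy sketch contrasting BRDF vs. BTF sampling dimensionality.
# Names are hypothetical -- not the V-Ray or VRscans API.

import numpy as np

def sample_brdf(light_dir, view_dir):
    """A BRDF is 4D: reflectance depends only on the two directions.
    Toy Lambert + Blinn-Phong lobe, identical everywhere on the surface."""
    n = np.array([0.0, 0.0, 1.0])              # surface normal (local frame)
    h = light_dir + view_dir
    h = h / np.linalg.norm(h)                  # half vector
    diffuse = max(np.dot(n, light_dir), 0.0) / np.pi
    specular = max(np.dot(n, h), 0.0) ** 64
    return diffuse + specular

def sample_btf(texel_uv, light_dir, view_dir, table):
    """A BTF is 6D: position (u, v) plus the two directions. 'table' stands
    in for measured data -- one reflectance record per (texel, light bin,
    view bin), which is what a material scanner produces."""
    u_idx = int(texel_uv[0] * table.shape[0]) % table.shape[0]
    v_idx = int(texel_uv[1] * table.shape[1]) % table.shape[1]
    l_bin = int((light_dir[2] + 1.0) * 0.5 * (table.shape[2] - 1))
    v_bin = int((view_dir[2] + 1.0) * 0.5 * (table.shape[3] - 1))
    return table[u_idx, v_idx, l_bin, v_bin]

# Toy measured table: 8x8 texels, 16 light bins, 16 view bins.
measured = np.random.rand(8, 8, 16, 16)
L = np.array([0.0, 0.0, 1.0])
V = np.array([0.3, 0.0, 0.954]); V = V / np.linalg.norm(V)
print(sample_brdf(L, V))                         # same at every surface point
print(sample_btf((0.25, 0.75), L, V, measured))  # varies across the surface
```

The extra positional dimensions are why a scanned VRscan material lands close to photoreal out of the box, while a hand-dialed BRDF needs maps layered on top to fake the same variation.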

The whole idea is similar to Quixel’s Megascans. But the difference is that Megascans feed into map channels of standard renderer shaders — which you still need to dial in, once applied. The VRscans shader incorporates the values into the shader itself, which can then be used as a baseline reference for typical shader dev, or if you want to incorporate it into something like a game engine.

The approach is great when you have to match surfaces to ones captured photographically. But it’s also amazing for industries outside of entertainment (as if) — like fabrication, where you are trying to prototype products before you make the investment in actually purchasing the raw materials to build them. Real-world scans will allow you to visualize that stuff with confidence before making costly decisions.

Despite the development time, the tech was just released and is starting to get traction, both as a potential subscription service with access to a growing library, as well as a specific scanning service where clients can send in project-specific materials to be scanned. The process is limited to opaque hard surfaces. So, no skin or glass, or anything like that. But this is a pretty amazing start.

Glyph Software Mattepainting Toolkit

www.glyphfx.com

One component of visual effects that doesn’t really get much love, technically speaking, is matte paintings. The technique itself is one of the oldest in the book, starting with set painting from Georges Méliès around 1900. Willis O’Brien used them in King Kong 80-some years ago. Albert Whitlock was frequently hired by Hitchcock. But back then, the artists would paint on glass, and it would be photographed with either a piece painted black in front of it to generate a matte, or the paint would be scraped away and the live action would be shot through the matte painting, capturing it all in one pass.

Then along came digital painting. And after that, we could project paintings onto geometry. And then, everyone was all like, “Send it to DMP — they’ll fix it” (DMP = Digital Matte Painting). So, with the high demand for such things, it became necessary to have tools to manage it all. Traditionally (in digital terms), you have a matte painting that is supposed to be viewed from one camera angle, projected onto geometry as if from a projector. A building in a city that has a bunch of damage, for example. If you move to the side and reveal the other wall, then the painting doesn’t work anymore, and you have to make another painting from the new angle. But that painting doesn’t work from the first position, so you need to blend the two with a mask. Now imagine that there are fifty buildings. This is where Glyph comes in.
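Camera projection itself is simple to state: reproject each surface point through the painting camera and use the result as a texture coordinate. Here is a minimal Python sketch of that math, assuming a standard pinhole camera with 4x4 column-vector matrices — illustrative only, not Glyph’s or Maya’s actual API:

```python
import numpy as np

def projection_uv(points_world, view_matrix, proj_matrix):
    """Project world-space points through a painting camera and return
    texture coordinates in [0, 1] -- the essence of camera projection."""
    n = points_world.shape[0]
    homo = np.hstack([points_world, np.ones((n, 1))])   # to homogeneous coords
    clip = (proj_matrix @ view_matrix @ homo.T).T       # to clip space
    ndc = clip[:, :2] / clip[:, 3:4]                    # perspective divide
    return ndc * 0.5 + 0.5                              # NDC [-1,1] -> UV [0,1]

# Toy camera looking down -Z with a simple perspective matrix.
proj = np.array([[1.0, 0, 0, 0],
                 [0, 1.0, 0, 0],
                 [0, 0, -1.0, -0.1],
                 [0, 0, -1.0, 0]])
view = np.eye(4)
pts = np.array([[1.0, 2.0, -5.0], [0.0, 0.0, -5.0]])
print(projection_uv(pts, view, proj))   # where each point samples the painting
```

Points that land outside [0, 1], or on surfaces facing away from the camera, are exactly where one projection stops working — which is what the blending mattes described below exist to handle.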

Glyph Software’s Mattepainting Toolkit (gs_mptk) is a simple but powerful tool that creates layered shaders, lets you manage the textures (a.k.a. paintings) for each layer (up to 16), each tied to a projection camera, and controls the geometry that the shader is attached to. It uses Viewport 2.0 in Maya to display the paintings in the context of the shot. And on top of that, it has a toolset that makes managing everything easier.

For instance, you can generate occlusion and coverage maps. The coverage maps show which parts of the objects in the scene the shot camera sees from the beginning to the end of the shot, revealing to the matte painter where the painting ends and thus avoiding unnecessary work.
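The idea behind a coverage map reduces to a union of per-frame visibility: any texel the shot camera sees on any frame needs paint. A toy Python sketch under that assumption — the per-frame visibility masks are taken as given here, whereas a real tool would rasterize them from the shot camera:

```python
import numpy as np

def coverage_map(visibility_per_frame):
    """Union per-frame visibility masks (H x W booleans in the object's UV
    space) over the whole shot: True = this texel is seen on some frame,
    so the matte painter has to cover it."""
    coverage = np.zeros_like(visibility_per_frame[0], dtype=bool)
    for frame_mask in visibility_per_frame:
        coverage |= frame_mask           # seen on any frame -> needs paint
    return coverage

# Example: a 3-frame shot over a 4x4 UV tile.
frames = [np.random.rand(4, 4) > 0.5 for _ in range(3)]
print(coverage_map(frames).astype(int))
```

Everything still False at the end of the shot is geometry the camera never sees — and paint the artist never has to touch.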

Then there are mattes in many different flavors, which are used to blend the different projections. Shadow Occlusions will, in effect, turn the projection camera into a light, and whatever geometry is not “illuminated” will reveal the next projected painting down in the layered shader — which is a different projection from a different camera. Facing Ratio does a similar thing, but fades the mask the further the faces of the geometry turn away from the camera. And finally, you can go old school and explicitly paint the areas that you want to blend using Maya’s internal paint tools. And once you are done, you can bake down the textures to the original UV maps on the objects.
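A facing-ratio matte is, at bottom, just the dot product between the surface normal and the direction back to the projection camera, remapped to taste. A minimal sketch of that falloff in Python — illustrative only, not the gs_mptk implementation:

```python
import numpy as np

def facing_ratio_matte(normals, points, camera_pos, falloff=2.0):
    """Per-point blend weight for a camera projection: 1.0 where the surface
    squarely faces the projection camera, fading to 0.0 as it turns away.
    'falloff' sharpens the transition between stacked projections."""
    to_cam = camera_pos - points
    to_cam /= np.linalg.norm(to_cam, axis=1, keepdims=True)
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    ratio = np.clip(np.sum(n * to_cam, axis=1), 0.0, 1.0)  # cos of angle; back faces -> 0
    return ratio ** falloff

# Two points: one facing the camera head-on, one edge-on.
normals = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
points = np.zeros((2, 3))
print(facing_ratio_matte(normals, points, np.array([0.0, 0.0, 10.0])))
```

Stacked in a layered shader, this weight decides how much of each camera’s painting shows through before the next projection down takes over.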

This is the core functionality of Glyph’s Toolkit ... but it doesn’t stop there. You can also import point clouds generated from photogrammetry software like Photoscan and Photosynth.

For matte painters, this tool is a must. If I were to quibble, I would love the texture baking to utilize UDIM UV space — for feature film FX, the traditional 0-1 UV space just doesn’t cut it anymore. But maybe we’ll see that in future versions.

“…lously depict how Kyuta forms his identity. That’s why I made him a character with emotional turmoil in his heart.”

As Kyuta and Kumatetsu spar and train, they often trade roles as student and teacher. Over the course of eight years, Kyuta grows strong and adept; Kumatetsu becomes more disciplined and thoughtful.

Student Becomes Teacher “I think that parents and teachers have historically taken a ‘top-down’ approach to raising children, but these days I suspect it’s become more mutual growth,” Hosoda continues. “Parents and teachers today can be considered imperfect; they and their children need to mature together. I used the relationship between Kumatetsu and Kyuta to express my wish for children to encounter different people whom they can call their ‘teachers of choice,’ people who help them mature into adults. Simultaneously, I wanted to show adults how wonderful it is we don’t have to just look back on those bygone days when we were ‘growing up’ — we can keep on growing.”

In the human world, Kyuta meets the schoolgirl Kaede, whom he protects from bullying classmates. In return, she helps him read Moby-Dick. As Kumatetsu squares off against Iozen (Sean Hennigan), his rival for the throne of the Jutengai, Kyuta confronts Iozen’s son Ichirohiko (Austin Tindle) — and the darker side of his own nature. Their climactic battle evokes Moby-Dick in a spectacular combination of drawn animation and CG. Ichirohiko takes the form of the great white whale, moving like a shadow through neon-drenched Shibuya and against the starry night sky.

Hosoda, who read Melville as an adolescent, explains: “Kaede says, ‘Ichirohiko is fighting the very darkness — the “beast” — within himself.’ I cited Moby-Dick in the film to show that it’s humans who are beastlike, and beasts who are humane. The whale is a symbol of human desire, so it’s highly symbolic for a whale to swim through Shibuya, a human city steeped in desire. The mixture of ugliness and beauty is a key here, so the whale is depicted in a dreamlike, beautiful way.”

Sticking with 2D As the battle with the whale proves yet again, few directors can match Hosoda’s ability to blend media in striking, imaginative ways. He dismisses the idea of making a CG feature. “Animation is drawings,” he declares. “I don’t think of animation as an extension of live action; it’s an extension of the arc of art history. I want to demonstrate the possibilities of animation by using pioneering visual expressions, by depicting familiar motifs that anyone can identify with in a fictitious world completely different from our own.”

He concludes, “People often ask me, ‘Why don’t you make (purely) CG films?’ But in the art world, nobody says oil paints are old and the digital art on your tablet is the new thing. I don’t think the techniques you use are important. What’s important is your art itself: that’s what moves people emotionally.”
