How to Create More Time, Part 1 of 2
Contrary to popular belief, time is not money. Time is more precious, by far. With all the money in the world, you can’t buy more time. But with a little bit of time, you can make an unlimited amount of money. Hence, time is exponentially more precious than money.
Other than eating healthier, exercising more and living a less-toxic lifestyle, the assumption is there’s really not much you can do to extend your stay on this planet. However, with some careful measuring and reallocating of resources, it is possible to create more usable time while you’re here.

Become the Chronoscope

You must first measure where you are before you can formulate a plan for how to get where you want to go. With that in mind, over the next two weeks, make a log of every single thing you do and the amount of time it takes to do it:
When do you wake up? Do you hit the snooze? What do you do for breakfast? Do you brew your coffee or grab it on the go? What do you do at home, at work, at play? How much TV do you watch? Do you drive to work? How long does it take?
Meticulously document every minute of every day. The more detailed you make your measuring, the more time you will be able to capture. This may sound like a lot of work, but if you take it seriously, tracking everything you do for the next two weeks could be the starting point for adding an untold number of years of usable time to your life.
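If you keep the log digitally, even a few lines of code (or a spreadsheet) can total up where your time actually goes. A minimal sketch, with invented sample entries standing in for a real log:

```python
from collections import Counter

# Each entry: (activity, minutes spent). These are invented examples,
# not data from the article -- substitute your own two-week log.
log = [
    ("snooze", 15), ("coffee run", 20), ("commute", 45),
    ("TV", 60), ("coffee run", 20), ("commute", 45),
]

# Tally total minutes per activity.
totals = Counter()
for activity, minutes in log:
    totals[activity] += minutes

# Biggest time sinks first -- these are your automation/outsourcing candidates.
for activity, minutes in totals.most_common():
    print(f"{activity}: {minutes} min")
```

Sorting by total makes the repetitious, high-cost habits jump out, which is exactly what the next step looks for.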
Automation

Now that you’ve documented a typical two-week slice of your life, search for patterns. Review all your documentation and identify the things you do that are repetitious: habits or rituals. For example, do you wake up every morning and then brew coffee or tea? How long does this take? Do you pay your bills online or in person? How often do you go grocery shopping?
Once you identify patterns, the goal is to find ways to automate these tasks: Use the auto-brew feature on a coffee maker so it’s ready when you wake up. Whenever possible, use auto-pay for your bills. As convenient as paying bills online may seem, doing this on a recurring basis wastes precious moments each and every time you do it. There are a number of grocery and produce delivery services that happily bring fresh food right to your doorstep on a weekly, biweekly or monthly basis, thereby reducing the number of trips you need to make to the store.
Find as many patterns as you can and figure out a way to automate each one so you no longer need to spend waking hours performing these tasks. Once identified and implemented, keep track of these time savings and add them to your running total.

Outsourcing

Find things you do on a regular basis that you could outsource to someone else. There is a simple rule in business that applies doubly to your life’s timeline: never do something yourself when you can hire someone else to do it, especially if doing so turns a profit. For example, if you spend 10 hours per week performing a task that can be delegated to someone else, how many higher-performing tasks could you fill that time with that will generate more benefit for you than the cost of paying someone to do it? If it costs $500 per week to pay someone to free up 10 hours of your time, and you use those hours to generate $600 above and beyond what you were earning before, then it’s a win-win for everyone involved.
This also serves as a perfect example of how to use leverage to create more results, while more people benefit from the exact same chunk of your time. If you mow your own lawn, consider hiring someone to do this for you. If you typically clean your house once per week, there are a number of very reasonable house-cleaning services that would gladly take this task off your hands, freeing time for you to do other things. Really stretch your comfort zone on this concept and see how many tasks you can outsource, casting obligation and guilt aside.
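The break-even arithmetic above reduces to a one-line formula. A sketch (the function name and hourly breakdown are ours; the $500-cost, $600-return figures are the article’s example, i.e., $50/hour paid out versus $60/hour generated over 10 freed hours):

```python
def weekly_net_gain(hours_freed, hourly_cost, hourly_value):
    """Net weekly dollars gained (or lost, if negative) by outsourcing.

    hours_freed: hours per week you hand off to someone else.
    hourly_cost: what you pay per hour to have the task done.
    hourly_value: what you earn per freed-up hour doing higher-value work.
    """
    return hours_freed * (hourly_value - hourly_cost)

# The article's example: $500/week buys back 10 hours ($50/hr),
# which you use to generate $600 ($60/hr) -- a $100 weekly gain.
print(weekly_net_gain(10, 50, 60))  # 100
```

If the result is positive, outsourcing pays for itself; if negative, either the task is too cheap to delegate or your freed hours aren’t yet earning enough.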
To be continued ... next issue!

Martin Grebing is an award-winning animation director and producer who has focused his career on smaller studios and alternative markets. Today, he provides private consulting and is the president of Funnybone Animation, a boutique studio that produces animation for a wide range of clients and industries. He can be reached via www.funnyboneanimation.com.
Suicide Squad from Warner Bros. and DC is like an outside-the-box version of Marvel’s Guardians of the Galaxy: a group of “disposable assets” recruited for high-risk missions and consisting of the Joker’s girlfriend Harley Quinn (Margot Robbie); elite hit man Deadshot (Will Smith); pyrokinetic El Diablo (Jay Hernandez); thief Captain Boomerang (Jai Courtney); Killer Croc (Adewale Akinnuoye-Agbaje); and mercenary Slipknot (Adam Beach).
Director David Ayer approached military vehicles, weapons and gunfire with the same kind of authenticity he brought to his breakout feature Fury. And that was his VFX mandate for handling the crucial battle scenes.
“We did a lot of work early on to set the rules for those types of things,” says Robert Winter, MPC VFX supervisor in Montreal. “How do tracers photograph in a film camera at 24 frames per second with a wide-open shutter? This allowed us to get our photographic look and motion blur.”
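The camera rule Winter alludes to is standard cinematography math: per-frame exposure time is (shutter angle / 360°) × (1 / frame rate), so a wide-open (360°) shutter at 24 fps exposes each frame for 1/24 s, twice as long as a conventional 180° shutter, which is why tracers streak further. A quick sketch of that relationship (not MPC’s actual tooling):

```python
def exposure_time_s(fps, shutter_angle_deg):
    """Per-frame exposure time in seconds for a rotary-shutter film camera.

    Formula: (shutter_angle / 360) * (1 / fps).
    """
    return (shutter_angle_deg / 360.0) / fps

# Wide-open 360-degree shutter at 24 fps: 1/24 s per frame.
print(round(exposure_time_s(24, 360), 4))  # 0.0417
# Standard 180-degree shutter at 24 fps: 1/48 s per frame -- half the blur.
print(round(exposure_time_s(24, 180), 4))  # 0.0208
```

Doubling the exposure window doubles how far a fast-moving tracer travels while the shutter is open, which is the “photographic look” the team was matching in CG motion blur.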
MPC worked on three big gun battles (700 shots out of 1,100); and Sony Pictures Imageworks (in Vancouver and Culver City) handled the climactic fight with the Enchantress (Cara Delevingne) along with the creation of her brother, Incubus (Alain Chanoine), and his battle with a mega version of Diablo.
The first battle occurs when the squad enters Midway City and encounters a group of demon soldiers in a city alley. “That was an interesting sequence from the standpoint that they shot a lot of stunt guys as these demon soldiers in costume, and then we either augmented in CG or added more CG demon soldiers to fill out the numbers,” says Winter.
They were possessed military guys: they ran like humans but were covered in black. “The big creative direction was that when they get struck by a bullet, they would shatter into an obsidian-looking rock. So we had these deforming characters that would get hit by a bullet and an arm would break off, and repetitive damage where he’d get hit again and his other arm or head would get knocked off. It was very challenging to take deforming, organic characters, which perform one way when intact, and destroy them systematically,” says Winter.
Lock and Load

The squad used a host of weapons, including Quinn’s bat, bullets, a sword and a boomerang. So MPC created a variety of destruction VFX in Houdini and Kali, the studio’s in-house rigid-body simulator. Winter likened the debris to The Matrix, but in real time and without the overcranking.
In another battle with the demon soldiers, Diablo, who has forsworn his fire-creating powers because of a personal tragedy, lets loose for the first time. Until then, we had seen only hints: a dancing flame girl in his hand, or letters forming in the air. The fire was created with Flowline and rendered in RenderMan.
Another skirmish ensues on a rooftop with a burning skyscraper in the background. There are CG shots of helicopters flying through a canyon of buildings, and the Joker (Jared Leto) arrives to rescue Quinn armed with a mini-gun. There are nearly 100 shots in the sequence, consisting of rotor wash, simulations of the smoke coming out of the fires, embers in the atmosphere and dynamic fire that’s always changing.
Also, for a flashback, MPC did extensions of the interior of the chemical-plant set where the Joker and Harley romantically jump into a vat of acid. They populated the mechanisms and the vats in CG and added the liquid and the swirling colors of purple, red and blue from their melted clothes.

Taking Possession

Finally, MPC did the initial Enchantress possession. For this, Ayer didn’t want it to look magical, so they achieved a transformation containing a black aura that exudes negative energy.
With Steven Spielberg’s Roald Dahl live-action hybrid adaptation, The BFG, Weta Digital refined its Oscar-winning performance-capture work, building on what it has achieved with Avatar and the Planet of the Apes franchise.
“They needed to get into the character right away, so they started building 3D models, using some of Mark Rylance (the actor who played the eponymous 25-foot giant) and some of the design ideas and refining those concepts and showing them to Spielberg,” says Joe Letteri, senior visual effects supervisor for Weta.
“Ultimately, we came up with a design that was a little bit more cartoony than Mark, and that’s what we built for the stage model that he was shooting with when we were doing all the (virtual) simul-cam,” Letteri says. “But, after cutting the movie and turning it over, after seeing Mark more and more in the role, we put a lot of Mark’s features in there, like refining his mouth and eyes. We worked out the last of it in the final four to six months. Mark played the old man grumpiness and we watched that evolve. We spent a long time studying his performance and getting the nuances in the animation.”
Unlike Tintin, which was all performance-capture, there’s a live-action component and a scale difference to The BFG, and Spielberg needed to figure out how to shoot it. He was worried about shooting multiple times to get it all together, which is somewhat unavoidable with scaled characters, according to Letteri.
“We realized that what you really needed was to have Mark and Ruby (Barnhill, who plays the orphan girl, Sophie) act together as much as possible,” Letteri says.
Since Apes, though, they knew they could capture with a fully lighted environment. And with Rylance being such a theatrical performer, they played to his strengths by building a set with minimal props and lighting, designed by multiple Oscar winner Rick Carter. They also put together a previz demo called “Moody MoCap” for this stage set that enabled Spielberg to conceptualize the shoot.
Spielberg liked being part of the world, so they broke down the previz to discern what they could shoot in the virtual volume and on the Moody MoCap stage. “For example, take the cottage scene,” Letteri says. “We did that first as a master. We had a set built to Mark’s proportion as a giant. He could walk in, do everything he wanted, and Steven was there with him. Any time there was a dialogue moment between Mark and Ruby, we’d have a place for her so she could go on her knees and be at the right eye line for him.”
After working out the master, they went to the human-size live-action stage with bluescreens and a giant-size table for Barnhill, and Rylance was up on a platform for the accurate eye line. This allowed them to shoot her while recapturing Rylance.
“We did two things: Traditional simul-cam, where you saw the virtual set, and simul-cap where they captured Mark simultaneously live as he performed with Ruby either on the Moody MoCap set or the live set. Steven could then pick and choose among the two performances with the girl,” Letteri says.
Then there was a third set, for the attack of the giants (twice the size of the BFG), scaled down and built with a wire-frame mesh. The giant combatants (who were more caricatured) hunched down, and when all of the characters were in the same shot, Rylance was on his knees and Barnhill was on her belly to get the eye lines right. But everyone could play it live, and they pieced it all together.

Dreams in a Jar

The BFG is also a dream catcher, and a lot of creative animation was devoted to the scenarios inside the jars where he stores the dreams. “The dreams were a combination of simulation and animation because we kept trying to come up with a vocabulary for them,” Letteri says. “Steven wanted them to tell a story but not be too literal so they didn’t look like aliens trapped in a jar.”
But you also couldn’t be too much like a lava lamp, because that’s not interesting.
“For every dream, we came up with a story, and then our simulation team gave us a nice look that was a cross between a nebula in a galaxy but with a flowery effect,” he says. “We kept everything in motion and could dial in how literally you saw the dream at a given point in time. Every shot was choreographed to camera, revealing snapshots of a dream. Therefore, it was never generic movement even if you couldn’t always discern a meaning of the narrative. A lot of particle generation was done with Houdini, which works well with integrated animation pieces.”
But they got stuck on Sophie’s dream. “They kept coming up with literal ideas, such as she’s a dancer, but when you render these ideas in concrete form, they seemed mundane,” Letteri says. “Jamie Beard (the animation supervisor) came up with her dream revealing itself to her as a drawing that materializes in the style of the graphic line illustration (by Quentin Blake). We took the idea to Steven and he liked how it took it away from the literal.”

Bill Desowitz is crafts editor of Indiewire (www.indiewire.com) and the author of James Bond Unmasked (www.jamesbondunmasked.com).