So, let’s talk about CrazyTalk Animator 3. In last issue’s discussion of Perception Neuron, I mentioned that 3D motion-capture data could be fed into CTA3 and applied to 2D characters through the bone deformation system. But that is certainly not the end of CTA3’s functions.
The bone-based skeletal system has quite a few applications in a number of different areas within CTA3. The deforming capabilities allow you to attach bones to any imported image to add subtle — or not-so-subtle — animation. Get a scan of the Mona Lisa, extract her from the background, bring her into CTA3 and attach a series of bones, and then add a little head bob to some music.
This bone-driven approach also lets you break up characters for more complex animation, separating limbs and head from the body and masking out influences so that, for instance, the arms don’t affect the chest.
These bone systems can then be saved as templates, so you can swap out characters while reusing the same rigs and animations between them. So maybe you have five zombies: you can animate one and use that bone setup and animation as at least a foundation for the others. (You wouldn’t want to use exactly the same animation, because zombies are individuals, of course.)
Included in CTA3 is a library of human and animal motions, which can be layered into the timeline as sequences of animation that blend into one another. You can then take the sprites you’ve built as components of your character and attach them to the corresponding pieces on the template, including accessories like hats, jewelry, etc.
Facial animation has been enhanced with some key audio features, like scrubbing and text-to-speech tools for syncing the audio to phonemes. And with the added freeform deformation tool, you can add more movement to your original sprites to put in some additional personality. The facial animation system has also been expanded beyond human faces to include animals.
There are definitely more things to find in the CrazyTalk Animator package, including drag-and-drop animation behaviors and curves, customized FFDs for props, and expression-based animations, as well as the integration of 3D motion (including motion-capture data), as mentioned in conjunction with the Perception Neuron.
But try it out for yourself, or ask clients like Jimmy Kimmel, HBO, and Keanu Reeves, for starters.
nod. The alSurface shader that Arnold lovers cherish has been adapted for V-Ray, primarily as a complex skin shader. And MDL shaders from the NVIDIA library have been incorporated, as well as Forest Color support.
Furthermore, a ton of stuff has been pushed to the GPU for faster processing, including in-buffer lens effects, aerial perspective, V-Ray clipper, directional area lights, stochastic flakes, rounded corners, matte shadow, render mask, irradiance maps, and on-demand MIP-mapping. And they threw in a low GPU thread priority for load balancing.
Everyone loves beautiful renders. But everyone loves them more when they’re faster!
Allegorithmic has been going strong ever since it came out of the gate with Substance Designer and Substance Painter. It took the game and visual-effects industries by storm with its PBR approach to texture and shader design, as well as an intelligent workflow for dynamic shaders that use extra maps, such as normals, height, occlusion, etc., to drive how the shader behaves. These are the “substances.” And Substance Designer is where they are built.
In Substance Designer 6.0, Allegorithmic appears to have found room to make a powerful piece of software that much more powerful. Along with all kinds of preference options to improve the user experience, and some tweaks under the hood to make things faster, there are a number of new nodes to play with in your Substance graph.
A seemingly innocuous but deceptively powerful addition is the curve node. We are all familiar with controlling colors and such with curves in Photoshop or Nuke or any number of color-grading tools. And in SD6, you can drive color corrections or gamma or whatnot with bezier points on the curve. That’s the bread-and-butter stuff, though. Remember, in Substance Designer, you have other map parameters that can be affected, like normals and height. By feeding the curves into the height parameter, you are essentially defining the equivalent of a loft profile — the curve defining the top surface of the geometry the substance is attached to. Think of it like wainscoting on a wall, or intricate Rococo etching — all without the extra geometry.
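To make the loft-profile idea concrete, here is a minimal sketch in plain Python. None of these names come from Substance Designer’s actual API (its curve node uses bezier interpolation; this sketch assumes a simpler piecewise-linear curve), but the principle is the same: remapping a 0-to-1 horizontal gradient through a profile curve produces exactly this kind of relief.

```python
# Illustrative sketch, NOT Substance Designer internals: a user-drawn
# curve remaps a linear gradient into a height profile.

def curve(points, x):
    """Piecewise-linear evaluation of a curve from (x, y) control points."""
    points = sorted(points)
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return points[-1][1]

# A "wainscoting" profile: flat wall, a raised ridge, flat wall again.
profile_points = [(0.0, 0.0), (0.4, 0.0), (0.5, 1.0), (0.6, 0.0), (1.0, 0.0)]

# Feed a horizontal gradient (0..1 across the tile) through the curve:
width = 11
heights = [curve(profile_points, i / (width - 1)) for i in range(width)]
```

Pipe those heights into a height or displacement channel and you get the raised-panel look without modeling any extra geometry.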
The text node is a similar and simple node that allows you to add text to the substance (duh!), driven by system or custom fonts, and fully tileable.
Nodes can now work in 16-bit or 32-bit float, taking advantage of high dynamic range and allowing for internal creation and editing of HDR environments for lighting. And you can now bake out textures to 8K! But my favorite is the ability to shoot and process your own surface textures. By taking samples of your material with the lights at different angles, you can, through Substance Designer 6.0, extract proper normal, height and albedo maps — on top of the color — for a more precise replication of the real-world material. That’s pertinent to shader development both inside and outside of the Substance Designer workflow.
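Allegorithmic doesn’t publish the internals of this scan-processing pipeline, but the classic technique behind extracting normals and albedo from photos lit at different angles is photometric stereo: assuming a Lambertian surface, each light direction gives one linear equation in albedo-times-normal, and three non-coplanar lights make the system solvable. A single-pixel sketch under those assumptions (the light setup and all names here are illustrative, not from Substance Designer):

```python
# Illustrative photometric-stereo sketch for one pixel, assuming a
# Lambertian surface: intensity = albedo * dot(N, L).
import math

def det3(m):
    """Determinant of a 3x3 matrix (list of rows)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(L, I):
    """Solve the 3x3 linear system L x = I by Cramer's rule."""
    d = det3(L)
    x = []
    for col in range(3):
        Lc = [row[:] for row in L]
        for r in range(3):
            Lc[r][col] = I[r]
        x.append(det3(Lc) / d)
    return x

# Three known unit light directions (hypothetical capture setup):
s = 1 / math.sqrt(2)
lights = [[0.0, 0.0, 1.0], [s, 0.0, s], [0.0, s, s]]

# Simulated pixel intensities for a flat surface with albedo 0.6:
true_albedo, true_n = 0.6, [0.0, 0.0, 1.0]
I = [true_albedo * sum(n * l for n, l in zip(true_n, ld)) for ld in lights]

g = solve3(lights, I)                 # g = albedo * N
albedo = math.sqrt(sum(v * v for v in g))
normal = [v / albedo for v in g]      # recovered surface normal
```

Run per pixel across the photo set, the recovered normals and albedo become the normal and albedo maps; height can then be integrated from the normals.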
As I said earlier, a super strong release to an already super strong product. [ Todd Sheridan Perry is a visual-effects supervisor and digital artist who has worked on features including The Lord of the Rings: The Two Towers, Speed Racer, 2012, Final Destination 5 and Avengers: Age of Ultron. You can reach him at firstname.lastname@example.org.
through New Port City were dubbed “ghost cams.”
“Guillaume Rocheron used Google Maps to create a rough version of the camera going through Hong Kong,” says Bonami. “Originally, the idea was to have drone footage, but it didn’t work out because of Hong Kong regulations. A team spent nights going up onto rooftops and taking photographic footage along the path the camera would travel.”
Miniatures were made of monolithic buildings that were turned into CG versions. “We put in many 3D props,” says Bonami. “Once everyone was happy with the camera move, Rupert and Guillaume decided where to put the storytelling solograms. Then we could start putting in the secondary ones as well as directing crowds, cars, highways and street signage.”
Unlike the Major (Johansson), Kuze, played by Michael Pitt, is viewed to be a failed robotic experiment.
“Rupert had us strip the design down in order to see more of the internals, muscles and skeleton, to emphasize that Kuze mirrors Scarlett’s character,” says Bonami. “Most of Michael Pitt was replaced, but we kept his eyes, lips and the subtleness of his expression.”
Clockwork mechanisms are revealed beneath the shell encasement.
“We cheated the lighting of the skeletons by adding some subtle rim lighting,” Bonami says. “Guillaume brought the skeleton prop (created by Weta Workshop) on set, so we had a lighting reference. The prop muscles were too much like clear plastic, so we added a Gummy Bear feel to make them look more organic.”
Anime Expectations
Living up to the anime was another challenge, especially for some of the best-known scenes. “The shelling sequence is one of the most iconic sequences from the anime so there was no pressure at all!” Bonami says with a laugh. “We wanted to add a photorealistic look to it. Everything was shot with the skeleton prop, which was replaced with a CG version for camera moves and reframing. There are still a few shots with the practical skeleton. We studied with Rupert the color balance for the scene. When she comes out of the white liquid, we used practical and CG shots.”
The fluid simulations were complicated by having to match the live-action footage. “The previs gave us a good reference, but we wanted to have skin scatter and look slightly transparent as well as to get the right elevation towards camera, so that framing worked with all of the shots,” he says.
In another key sequence, Major leaps off of a high-rise building. “We had great reference of a stunt person wearing the thermoptic suit jumping off of the roof,” says Bonami, who had to make the invisible Major visible for the viewer. “The thermoptic suit doesn’t always work well, so that is when the high-tech outfit reveals how it works. However, you still have
“Yeah, there were some days when we just realized the production was going to be shut down because if we went outside in all of that wind, it was going to be bad — very, very bad,” says Wassel, a veteran of The Fast and the Furious, Fast Five and Furious 7.
Window Shots
Although not as exotic or unusual as Iceland, Cleveland played a significant role in F8 by subbing for New York, where filming is pretty difficult if you want to stage a high-speed chase with a half dozen high-powered cars. Over the course of three weeks, crews used city streets and pushed cars out of windows for one of the more memorable sequences in the film.
This time again, the secret sauce was a mixture of real cars being pushed out of the windows of a building in addition to CG cars based on the information gained by shooting the real ones falling.
“There were some cars that were weighted to fall a certain way to get a specific look to the fall, and there were others that we adjusted for other kinds of impacts,” says McIlwain. “A tremendous amount of thought and planning goes into destroying the cars, but not all of them are just thrown away at the end, since it makes more sense to recycle or reuse them when you can, even if it means having to fix them up a bit after you’ve tossed them around.”
In one of the more chilling scenes, Cipher, the film’s cyber villain, unleashes a squad of self-driving cars that have been hacked. The filmmakers refined this plot point before fears of actual hacking into car computer systems became part of the headlines, and now it seems almost prescient. Here the cars were both real and digital again and shot on the streets of Cleveland without any of the A-list cast on set. Later on, the VFX team merged the footage of the cars and the actors to bring together the edgy series of events.
“It’s really all about imagining something fun that will make it worthwhile for the people who come to see these movies so they keep coming back to see them when a new one is made,” says Wassel. [ Karen Idelson is an entertainment technology writer whose dog wasn’t too happy about staying inside while she wrote this article.
there goes six months of my life,’” says Jarrett, with a laugh.
There were similar replacements done for the Laura character played by Dafne Keen and the stuntwomen who doubled her on screen. Complex scans were done of Keen and each stunt performer so that the digital doubles could match them in action sequences and become as believable as possible. As time-consuming as the process can be, you can’t argue with the results. Early facial, head and neck replacement effects in films like Titanic yielded mixed results at best. Since then, the process has become more refined, and filmmakers have learned from past attempts to create this illusion.
“What I found was that we’d solved most of the technological issues, really,” says Jarrett. “But what we really needed was lots and lots of time to noodle with our shots, because we’re used to seeing faces and we know what they should look like. And then, if you’re dealing with Hugh’s face, you’ve got all that baggage on top of that as well, so you’re going to have to spend time with lighting and other things to bring the face to the point where it looks real to the audience, and their preconceived ideas about how Hugh should look play into all of that.”
Time Management
Visual-effects artists often had to work through the details, making subtle and even unexpected tweaks that make the replacements seamless. For Jarrett, who was on the film for about 15 months, time and craft became key.
Jarrett, who is also a veteran of Harry Potter and the Sorcerer’s Stone and Sweeney Todd: The Demon Barber of Fleet Street, credits Mangold with giving the film its unique look and imagining visual effects that worked with a pared-down, while still imaginative, look for Logan. Mangold himself has mentioned the visual reference points of films such as The Wrestler and classics such as Shane, The Cowboys and The Gauntlet when designing his approach to the story.
“I learned an awful lot from Jim (Mangold) in terms of action and storytelling,” says Jarrett. “Developing the scene where there was a chase to the train was just a really fun sequence to work on and Jim’s ideas about how (Logan) shouldn’t just crash through the fence and everything (Logan) does goes wrong and he has to think of another way to do things, that’s essentially every beat of the scene. He always wanted to find a way to make it more fun and more about the character.” [ Karen Idelson is an entertainment technology writer and former visual-effects artist who lives in the South Bay.