3D World

Baba Yaga: Shaping the Story

Trevor Hogg uncovers how Baobab Studios makes the viewer an important character in Baba Yaga…


We explore this immersive VR experience by Baobab Studios

What if the viewer could be the protagonist in the narrative? Baobab Studios has been attempting to answer that question both creatively and technically since being founded by CEO Maureen Fan, CCO Eric Darnell and CTO Larry Cutler in 2015. Each project from Invasion! to Baba Yaga has expanded the understanding of virtual reality as an artform. “The power of VR is immersion, so it got down to us thinking about how we can tell stories that make the viewer matter,” states Darnell, who previously made a name for himself at DreamWorks Animation by co-directing Antz and the Madagascar franchise. “Not just where they sit there and watch a story unfold, or like a game where they have to go through a series of puzzles or challenges.”

“We had to find ways for the AI system to work with handcrafted animation” Eric Darnell, Baobab Studios CCO, director of Baba Yaga

Simple gestures such as a character handing an object to the viewer can be equally engaging. “The lantern [in Baba Yaga] was difficult because it is a real-time element,” remarks Nathaniel Dirksen, digital supervisor and head of engineering, Baobab Studios. “If you give the viewer the ability to shine a light around the scene you’ve just made life harder, but that’s cool so you do it anyway!”

Wireless headsets, hand tracking and machine learning have elevated the interactivity of Baba Yaga, which revolves around a witch (Kate Winslet) in a rainforest encountering Magda (Daisy Ridley) and her older sibling (the viewer) as they seek a mysterious flower to cure their ailing mother (Glenn Close). “We’re running on the Oculus Quest, which is fantastic because it’s an untethered headset,” notes Dirksen. “You can walk around your room and do whatever you want. However, it is powered by a phone video rendering chip, which is way less powerful than what you get on a computer. There were a lot of visual challenges in terms of coming up with a visual aesthetic that looks good and is compelling, but is also able to run on this low-end hardware.”

Both hand controllers and hand tracking can be utilised by viewers. “There are certainly trade-offs because you get some haptic feedback with the hand controllers that you don’t get when it’s just tracking your hands,” notes Darnell. “The future is going to be putting both of those things together, probably with some kind of glove that gives you that tactile feedback, but still allows you to be natural and gestural.” Machine learning has enabled more credible reactions by the various characters, such as not always responding the same way to repetitive actions. “We first dug into machine learning with Bonfire. The viewer has this robot sidekick voiced by Ali Wong. You can throw a log and hit her, and she will actually go, ‘Will you stop that!?’ then carry on with whatever she was doing. If you do it again, she’s like, ‘Quit it!’ Eventually, if you keep doing it, she’ll turn away and won’t pay any attention to you for a while. These are things that you might expect. It’s not like every time you throw a log at her that she says, ‘Will you quit it!?’ That’s when you see the wizard behind the curtain.”
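The escalating responses Darnell describes follow a simple pattern that can be sketched in code. The snippet below is purely illustrative, not Baobab's actual system (the class name, response list and threshold are invented): repeats of the same viewer action step through increasingly annoyed reactions until the character disengages.

```python
# Illustrative sketch of escalating reactions to a repeated viewer action.
# All names and thresholds here are hypothetical, not Baobab Studios' code.

class ReactionEscalator:
    def __init__(self, responses, ignore_after):
        self.responses = responses        # ordered from mild to annoyed
        self.ignore_after = ignore_after  # repeats tolerated before disengaging
        self.count = 0

    def react(self, action):
        self.count += 1
        if self.count > self.ignore_after:
            return "ignore_viewer"  # turn away, pay no attention for a while
        # Clamp into the response list so repeats escalate, then plateau
        idx = min(self.count - 1, len(self.responses) - 1)
        return self.responses[idx]

robot = ReactionEscalator(
    responses=["Will you stop that!?", "Quit it!"],
    ignore_after=3,
)
print(robot.react("throw_log"))  # Will you stop that!?
print(robot.react("throw_log"))  # Quit it!
print(robot.react("throw_log"))  # Quit it!
print(robot.react("throw_log"))  # ignore_viewer
```

The point of the plateau-then-disengage shape is exactly the one Darnell makes: a character that reacts identically every time exposes "the wizard behind the curtain".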

Not only did various animation clips have to be created for the branching narrative sections of the story, they also had to be seamlessly integrated. “We had to find ways for the AI system to work with handcrafted animation that trained animators have done, and for it to pick from this big library of potential actions, insert them into the scene, and actually modify everything at the same time; it matters where the character is, how they’re oriented to the viewer, and where the object of its desire is. It became a complex problem, but it ultimately paid off for us,” reveals Darnell. “The most important thing for the animators and me was making sure that these characters had a mind. Going through some action is far less important than seeing that internal dialogue. Often, I would sit down with the animators and we would actually write an internal dialogue that a character would be having inside their head; the animator would animate using that as a script even though the character does not voice any of it.”
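The clip-selection problem Darnell outlines can be sketched as a scoring function. This is a hypothetical simplification of the factors he lists (the clip fields, weights and function names are invented): each library clip is costed against the character's current distance and heading to its goal, and the cheapest fit is chosen.

```python
# Hypothetical sketch of picking an animation clip from a library based on
# where the character is and where the object of its desire is. Not Baobab's
# actual system; field names and weights are invented for illustration.
import math

def angle_between(a, b):
    """Unsigned angle (radians) between two 2D direction vectors."""
    na, nb = math.hypot(*a), math.hypot(*b)
    if na == 0 or nb == 0:
        return 0.0
    dot = a[0] * b[0] + a[1] * b[1]
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def pick_clip(clips, char_pos, char_facing, goal_pos):
    """Score each clip; the lowest cost is the best fit for the scene state."""
    to_goal = (goal_pos[0] - char_pos[0], goal_pos[1] - char_pos[1])
    best, best_cost = None, float("inf")
    for clip in clips:
        # How far the clip's baked travel distance misses the actual distance
        dist_err = abs(math.hypot(*to_goal) - clip["travel"])
        # How much the character must turn from its facing toward the goal
        turn_err = angle_between(char_facing, to_goal)
        cost = dist_err + clip["turn_weight"] * turn_err
        if cost < best_cost:
            best, best_cost = clip, cost
    return best

# Example: character at the origin facing +x, goal two metres ahead
clips = [
    {"name": "step_short", "travel": 0.5, "turn_weight": 0.5},
    {"name": "step_long", "travel": 2.0, "turn_weight": 0.5},
]
print(pick_clip(clips, (0, 0), (1, 0), (2, 0))["name"])  # step_long
```

A production system would additionally warp the chosen clip (orientation to the viewer, foot placement and so on) rather than just selecting it, which is the "modify everything at the same time" part Darnell describes.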

“We did have to come up with a way of combining previs and storyboards so you could iterate fast, but on a stage where you can start playing around with interactivity,” states Ken Fountain, animation supervisor, Baobab Studios. “We are still animating in Maya and try our best not to rely on one camera angle. Generally, we know where the viewer is going to be and animate from that angle, making sure that it looks good in Maya. But we also had to develop a piece of our pipeline that allowed the animators to quickly see their stuff inside of VR. By the time we got to Baba Yaga, we were getting good at giving the animators a quick way to put the headset on and see what they had just been working on.” A film-quality rig was created for Magda. “Our pipeline allows us to make complex skeletal rigid deformations and freeform deformations in a rig inside of Unity, which in a regular game environment you normally can’t do. You need to pick your battles because it’s memory sensitive. But if your main character is emoting, has the physicality and the artful silhouette that you want, then it makes the whole experience so much better.”

A mask is worn by the titular character, which complicated her ability to emote. “There is a beautifully simple Japanese theatrical style known as Noh,” states Fountain. “It’s like the old Italian theatre with the performers wearing different archetypal masks. We studied a lot of that. There were specific dailies reviews with the director where we put up three or four different examples of how Baba Yaga could move. We finally landed on a jellyfish style and Noh hand gestures. The big bird neck was great because we could pull from what we learned from Crow: The Legend. Not only did we have [the intensity of her glowing] eyes to help with her dialogue, we asked our rigger to put a wave deformer into her cape so it could be used as an emotional meter – so when she is angry the magical wind through that cape increases, and when Baba Yaga is being snarky and more like a person, the cape calms down. You need that pre-production time to be able to pull off those little tools.”
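The cape-as-emotional-meter idea can be illustrated with a toy wave deformer. This sketch is hypothetical (the parameter names and ranges are invented, and a production deformer would run on mesh vertices inside the engine): a single anger value scales the height, ripple count and speed of a travelling sine wave along the cape.

```python
# Illustrative wave deformer driven by an emotion parameter, in the spirit
# of the cape "emotional meter" Fountain describes. All constants are
# invented for illustration, not taken from Baobab's rig.
import math

def cape_wave_offset(u, time, anger):
    """Vertical offset for a point at parameter u (0..1) along the cape.

    anger in [0, 1]: a calm cape at 0, violent magical wind at 1.
    """
    amplitude = 0.02 + 0.25 * anger  # wave height grows with anger
    frequency = 2.0 + 6.0 * anger    # more ripples when angry
    speed = 1.0 + 4.0 * anger        # waves travel faster when angry
    return amplitude * math.sin(2 * math.pi * (frequency * u - speed * time))
```

The useful property is that one animator-facing control (anger) drives several low-level deformer parameters at once, which is what lets the cape read as a single emotional gauge rather than a physics simulation.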

STORYTELLING IN VR

Being able to tell a story where the viewer is invested in the characters is what drives the technological development for Baobab Studios.

“A lot of VR when we started was these tech demos showing off all of this cool stuff that you can do, but there wasn’t any story or meat on the bones associated with it,” notes Cutler. “We always start with the story and characters.”

“We do our animation work in Maya and real-time work in Unity,” he continues. “These are both amazing platforms to be able to develop on top of. We have built an entire interactive VR content creation platform called Storyteller that sits on top of Maya and Unity but also consists of standalone components that link workflows between the DCCs [Digital Content Creation tools].”

For the 2D version of Baba Yaga, a new toolset was produced to allow director Eric Darnell to capture the necessary footage within the realm of VR. “Eric would be in a VR headset, his hand controllers would control the camera, and he could take the VR footage and shoot the 2D cameras from that,” explains Cutler. “It was something that was completely necessary given the story we were telling.”

As with the studio’s previous projects, the biggest challenge was giving the viewer a meaningful role in the story. “You get to make that choice whether to become the next Baba Yaga,” states Larry Cutler. “From the beginning, we had this idea of giving you a mask that you could decide to put on. For us, we had to make sure whatever you do in that scenario had a satisfying ending, but one that has consequences. We’re telling a story that is about sisterly love and bringing their mother back to life. Involving human characters certainly was difficult from a technical standpoint, and we wanted it to have a nonrealistic, magical and storybook feel too. This is definitely by far our most ambitious and largest project; it pushed the level of sophistication that we needed to have everywhere.”

“We asked our rigger to put a wave deformer into Baba Yaga’s cape so it could be used as an emotional meter” Ken Fountain, animation supervisor, Baobab Studios

KEEPING TRACK

Hand tracking is incorporated into the storytelling of Baba Yaga. Baobab Studios worked closely with Oculus to make sure that what the viewer could do worked well when hand tracking was turned on. “When Oculus came to us towards the beginning of Baba Yaga and said, ‘We’re developing hand tracking so we can natively track your hands’, we were so excited because of the idea of it being even more immersive,” states Cutler. “What was cool is hand tracking allows you to track your fingers. In Baba Yaga, you get to befriend these Baby Chompies in the VR version. You can actually poke them and they will track your fingers. The Mama Chompie actually comes right up to you and you can pet her.”

Even the velocity of the hand motions and gestures has an impact on the storytelling. “At the beginning of the project, Eric Darnell said, ‘It would be cool if you could choose what type of witch you are’,” recalls Cutler. “We couldn’t figure out a way to have you act that out as opposed to pressing a button. Eventually, we pitched Eric on this idea of, ‘Wouldn’t it be great if you could wave your hands to decide whether you want to make it a dark or pretty forest?’ A lot of engineering went into making sure that we detected whether you were doing lightning bolts or soft gestures, which was something we learned a lot about on Crow: The Legend. That’s my favourite sequence, just from the pure interaction.”
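Telling lightning bolts from soft gestures comes down to how fast the hands move. A minimal, hypothetical sketch (the threshold and sample data are invented values, not Baobab's tuned detector) classifies a gesture from the peak speed between tracked hand samples:

```python
# Hypothetical gesture classifier: sharp "lightning bolt" vs. soft wave,
# judged by peak hand speed between tracked position samples.
import math

def classify_gesture(positions, dt, threshold=1.5):
    """positions: list of (x, y, z) hand samples; dt: seconds between samples.

    Returns "lightning" for a fast, jabbing motion, "soft" otherwise.
    The 1.5 m/s threshold is an illustrative value, not a tuned one.
    """
    peak = 0.0
    for p, q in zip(positions, positions[1:]):
        peak = max(peak, math.dist(p, q) / dt)
    return "lightning" if peak > threshold else "soft"

# A slow sweep versus a quick jab (samples 0.1 s apart):
slow = [(0, 0, 0), (0.05, 0, 0), (0.10, 0, 0)]
fast = [(0, 0, 0), (0.4, 0, 0), (0.8, 0, 0)]
print(classify_gesture(slow, 0.1))  # soft       (peak 0.5 m/s)
print(classify_gesture(fast, 0.1))  # lightning  (peak 4.0 m/s)
```

A real detector would likely also look at direction changes and gesture shape, but peak velocity alone is enough to show how "how you move" can drive a narrative branch instead of a button press.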

Baba Yaga is a VR experience in which the actions of the viewer determine the ending of the story

Top: Concept art for the prologue, which makes use of a storybook aesthetic
Above: Viewers have the ability to reshape the forest through the velocity of their hand gestures and movements

Main: Concept art of the house of Baba Yaga where Magda is held captive

Above: The rig created for Magda was as sophisticated as one designed for a principal character in an animated feature
Simple gestures such as Magda offering the viewer the opportunity to carry the lantern help to create an immersive story

Above: The clothing of Magda was hand animated to give it a tactile feel

Below: Some early concept designs of Magda
