3D World

ROBOT MOCAP MAYHEM

The VFX behind The Chemical Brothers’ music video


In their always-unique music videos, The Chemical Brothers often take advantage of innovative CG and visual effects techniques. That's no different in the recent promo for their latest song, Free Yourself, where a mini robot uprising – and dance party – takes place in a storage warehouse.

The promo, directed by Dom&nic and made through production company Outsider, features heavy visual effects by The Mill, which delivered the hordes of robots, some of which have human-like faces. During the action, several robots rip their human masks off and put other human parts on.

For 3D World, The Mill's visual effects supervisor Sid Harrington-Odedra breaks down how motion capture informed the movement of the bots, and which CG techniques helped make the intriguing music video possible.

Planning a robot uprising

An early rough treatment formed the basis of The Mill's planning for the promo, which in particular was imagined by the directors as involving relatively long shots of between 16 and 60 seconds. “The camera moves were very complex and required very intricate timings and key positions to be hit in the warehouse space, all without being able to use motion control,” notes Harrington-Odedra. “Having scouted the location, Printworks in London, we were able to make up a basic digital version of the space as a base for previs. We spent roughly three weeks blocking out these shots to the point where the directors were happy.”

At the shoot, director of photography Alex Barber utilised that previs and technical measurements of the location, taken inside Maya, to re-create the appropriate camera moves. This was done without the use of any physical robots or stand-ins, partly because of the overlap in timing between design and shoot. “The shots done at the location were captured at 4K, 4:3 and at 50 frames per second in order to give us the most flexibility to repair or alter the moves in post-production,” says Harrington-Odedra. “All of the lighting reference came from our standard set of chrome/grey balls, Macbeth charts and HDRIs, all captured on location.”

Making mocap

With so many robots to fill the frames, The Mill knew that motion capture would be the desired option. But due to budgetary constraints, an optical mocap route, which would likely have required a capture volume, was not considered viable. Instead, during a separate shoot, the studio chose to use Xsens’ MVN markerless capture system, which uses gyroscopes and accelerometers, among other sensors, fitted into skin-tight suits. “The gyros feed back sensor data on the orientation of each body part into proprietary software from Xsens, which can then reconstruct a pose in real time, so we were able to watch performances on set, as they happened,” describes Harrington-Odedra.
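Xsens’ own pose solver is proprietary, but the core idea of turning gyroscope readings into an orientation can be sketched as incremental quaternion integration. This is only an illustration of the gyro term, not the MVN algorithm, which also fuses accelerometer and other sensor data to control drift:

```python
import math

def integrate_gyro(q, omega, dt):
    """Advance an orientation quaternion q = (w, x, y, z) by one gyroscope
    sample omega = (wx, wy, wz) in rad/s over a timestep of dt seconds."""
    wx, wy, wz = omega
    angle = math.sqrt(wx * wx + wy * wy + wz * wz) * dt
    if angle < 1e-12:
        return q  # no measurable rotation this step
    # axis of the incremental rotation
    ax, ay, az = wx * dt / angle, wy * dt / angle, wz * dt / angle
    s, c = math.sin(angle / 2.0), math.cos(angle / 2.0)
    dw, dx, dy, dz = c, ax * s, ay * s, az * s
    w, x, y, z = q
    # Hamilton product q * dq applies the increment in the body frame
    return (
        w * dw - x * dx - y * dy - z * dz,
        w * dx + x * dw + y * dz - z * dy,
        w * dy - x * dz + y * dw + z * dx,
        w * dz + x * dy - y * dx + z * dw,
    )

# Spinning at 90 degrees per second about Z, sampled at 240 Hz for one second
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(240):
    q = integrate_gyro(q, (0.0, 0.0, math.pi / 2), 1.0 / 240)
```

In a real suit each body segment carries its own sensor, and the per-segment orientations are assembled into a full-body pose against a calibrated skeleton.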

“We were able to use two suits for the project,” he adds, “although the Xsens software allows you to capture many more than this at once. However, only having two suits meant that we needed to spend a lot of time planning out the more complex shots, so it felt as though there was more broad-scale interaction between the characters.”


Seeing the performance happen in real time was particularly helpful for one of the longest shots in the music video, where the hero character walks up to several pairs of robots and encourages them to dance. “In our previs,” says Harrington-Odedra, “the camera was circling around our character while she was hitting key positions in frame at very particular times. In reality, no characters were captured on our principal shoot day, and the Steadicam operator used our previs as a guide for where the action was and how far he had to travel.

“So in the case of this shot, our location shoot happened many weeks before our motion capture shoot. In order to capture the hero performance here, we ended up camera tracking the Steadicam footage and took that to one of our motion capture days. We gathered the distances and timings of the camera move in Maya, then effectively laid out a map on the floor of the studio that our hero dancer would have to follow. Once we had what we thought was a take where the positions and timings were correct, we were able to immediately retarget the captured animation onto our work-in-progress robot animation rig and then see it through the tracked camera, all inside Maya.”
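The “map on the floor” idea can be sketched simply: given the tracked camera’s position and view direction at key frames, drop a point a fixed distance along the lens axis onto the ground plane, giving the performer a timed mark to hit. The names and numbers below are illustrative, not The Mill’s actual tooling:

```python
import math

FPS = 50  # the plates were shot at 50 frames per second

def floor_marks(camera_keys, subject_distance=3.0):
    """camera_keys: list of (frame, camera_position, camera_forward) tuples,
    with positions/vectors as (x, y, z), y up. Returns (seconds, (x, z))
    marks: the point subject_distance along the view axis, on the floor."""
    marks = []
    for frame, (cx, cy, cz), (fx, fy, fz) in camera_keys:
        n = math.sqrt(fx * fx + fy * fy + fz * fz)
        fx, fy, fz = fx / n, fy / n, fz / n  # normalise the view direction
        # take the look-at point and keep only its ground-plane coordinates
        mx, mz = cx + fx * subject_distance, cz + fz * subject_distance
        marks.append((frame / FPS, (round(mx, 2), round(mz, 2))))
    return marks

# Two camera keys one second apart, both looking straight down +Z
keys = [
    (0,   (0.0, 1.5, 0.0), (0.0, 0.0, 1.0)),
    (100, (2.0, 1.5, 0.0), (0.0, 0.0, 1.0)),
]
marks = floor_marks(keys)
```

Tape those coordinates out on the mocap stage floor, and the dancer hits each mark at its timestamp so the retargeted take lines up with the tracked camera.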

Harrington-Odedra believes approaching things this way was a great example of how good planning meant the promo could be delivered on time. “We started with previs, used this to determine where our virtual characters are on location during principal photography, and then placed the performers all in the same positions, but on a completely different day!”

Taking it to the next level

On top of the motion capture, artists at The Mill added keyframed animation, and also had to deal with body parts, such as fingers, that were not captured by the suit. “Faces weren’t captured either, so they needed to be animated after the fact,” says Harrington-Odedra. “Then of course, there was the cleanup of the motion capture itself, which took up most of the time and was expertly handled by lead rigger/animator Matt Kavanagh and his team. Since the motion capture system was based around gyroscopes in a skin-tight suit, it was liable to muscle jiggle skewing the results, so this needed to be cleaned out.

Also, occasionally, the system would simply yield incorrect orientations for body parts, which all needed to be cleaned up.”
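The muscle-jiggle cleanup described above is, at its simplest, low-pass filtering of the captured curves. Here is a rough sketch of one common approach, a centred moving average over a single rotation channel; the window size is a guess, and production cleanup is far more selective than this:

```python
def smooth_channel(samples, window=5):
    """Smooth one per-frame animation channel (e.g. a joint's rotateX values)
    with a centred moving average; endpoints use a shrunken window."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

# High-frequency jiggle around a steady pose flattens toward the mean
noisy = [0.0, 10.0, 0.0, 10.0, 0.0]
smoothed = smooth_channel(noisy)
```

The trade-off is that an aggressive window also softens genuine fast motion, which is why this kind of pass is applied per shot and per joint rather than globally.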

There were three different robot models that made up the final hordes of metallic creatures: the humanoid male, the humanoid female, and then the slightly less-advanced faceless robot that The Mill lovingly called ‘Bob’. The studio was able to add variation to the robots by switching textures, and also by adding and removing faces and protective parts around the limbs.

“Most of the variation came with the texturing and shading,” states Harrington-Odedra. “The lead lighter, Clement Granjon, set up a shader in Houdini that would apply different levels of dirt, dust, scratches, decals, colour schemes and even stickers to the robots. The chosen scheme for the robots would be laid out at the Maya end, and then the attributes would be automatically picked up when rendering with HtoA.”
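The attribute-driven variation could work along these lines: each robot derives deterministic values for dirt, scratches and so on from its own ID, so the scheme laid out in the scene is reproduced exactly at render time. A hypothetical sketch, with invented attribute names rather than The Mill’s actual setup:

```python
import random

MODELS = ["humanoid_male", "humanoid_female", "bob"]  # the three robot builds

def variation_attrs(robot_id):
    """Return a deterministic set of per-robot shading attributes, so the
    renderer sees the same scheme every time the scene is rebuilt."""
    rng = random.Random(robot_id)  # seeded: same robot, same look
    return {
        "model": MODELS[robot_id % len(MODELS)],
        "dirt": round(rng.uniform(0.0, 1.0), 3),
        "scratches": round(rng.uniform(0.0, 1.0), 3),
        "decal_index": rng.randrange(8),
        "colour_scheme": rng.choice(["grey", "offwhite", "yellow"]),
    }
```

Writing these values onto each robot as user attributes in the scene lets a single shared shader read them at render time, instead of maintaining one shader per robot.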

Personality plus

Some of the robots, including the hero character, are mannequin-like, and needed to have a certain amount of facial animation. For that, The Mill initially tried an iPhone X app called Face Cap, which uses the iPhone’s front-facing camera and depth sensor to create an animated digital face.

“The animators were able to retarget onto our rig to use as a base for the final animation, but only a small amount of this made it into the final piece,” says Harrington-Odedra. “Most of the hero facial animation was arduously keyframed by hand, as it allowed us the added flexibility to respond to feedback to make the movement feel more or less mannequin-like.”

For the mannequin skin, Harrington-Odedra relates that the directors wanted the overall look to have a synthetic quality and not be too much like realistic skin. “Naturally,” he says, “this required a lot of tinkering. There was a key piece of reference that our directors were most keen on, a music video they had previously directed for The Chemical Brothers called Midnight Madness. The video featured a goblin performing some rather acrobatic dancing around Central London.

“The creature was effectively a human in a head mask, and after they had wrapped the video, they had kept hold of the mask itself! They brought it into the office and after studying it, we found that there was quite a high specular component to it, along with some slightly exaggerated pore detail. We built these details into our CG model and, along with a helping hand from our compositors, we landed upon our final look.”

Opposite (top): various poses of the robots in Free Yourself, achieved with motion capture and keyframing. Opposite (bottom): a final still from the music video, in which the hero mannequin-like robot starts considering a wider uprising. Below: close-up on the robot eye mechanism.
