The Phnom Penh Post

Startup taking AI to the movies

- Cade Metz

INSIDE an old auto body shop here in Silicon Valley, Stefan Avalos pushed a movie camera down a dolly track. He and a small crew were making a short film about self-driving cars. They were shooting a powder-blue 1962 Austin Mini, but through special effects the rusted relic would be transformed into an autonomous vehicle that looked more like the DeLorean from Back to the Future.

Stepping back from the camera, Avalos referred wryly to the movie he was filming as “Project Unemployment.” The film was a way of testing new technology from a startup called Arraiy, which is trying to automate the creation of digital effects for movies, television and games.

This new type of artificial intelligence, which is also being developed by the software giant Adobe and in other technology industry research labs, could ultimately replace many of the specialists who build such effects.

“This is no joke; it will put people out of work,” said Avalos, a Los Angeles-based filmmaker who also runs a visual effects house. “The artists are safe. But it will replace all the drudgery.”

Over the past three decades, computer-generated imagery has transformed how movies and television are made. But building digital effects is still a painstaking and enormously tedious process. For every second of movie time, armies of designers can spend hours isolating people and objects in raw camera footage, digitally building new images from scratch and combining the two as seamlessly as possible.

Arraiy (pronounced “array”) is building systems that can handle at least part of this process. The company’s founders, Gary Bradski and Ethan Rublee, also created Industrial Perception, one of several robotics startups snapped up by Google several years ago.

“Filmmakers can do this stuff, but they have to do it by hand,” said Bradski, a neuroscientist and computer vision specialist with a long history in Silicon Valley. He has worked with companies as varied as the chipmaker Intel and the augmented reality startup Magic Leap.

Backed by more than $10 million in financing from the Silicon Valley venture firm Lux Capital, SoftBank Ventures and others, Arraiy is part of a widespread effort spanning industry and academia and geared toward building systems that can generate and manipulate images on their own.

Thanks to improvements in neural networks – complex algorithms that can learn tasks by analysing vast amounts of data – these systems can edit noise and mistakes out of images, apply simple effects, create highly realistic images of entirely fake people, or graft one person’s head onto the body of someone else.

Inside Arraiy’s offices – the old auto body shop – Bradski and Rublee are building computer algorithms that can learn design tasks by analysing years of work by movie effects houses. That includes systems that learn to “rotoscope” raw camera footage, carefully separating people and objects from their backgrounds so that they can be dropped onto new backgrounds.

Adobe, which makes many of the software tools used by today’s designers, is also exploring machine learning that can automate similar tasks.

At Industrial Perception, Rublee helped develop computer vision for robots designed to perform tasks like loading and unloading freight trucks. Not long after Google acquired the startup, work on neural networks took off inside the tech giant. In about two weeks, a team of Google researchers “trained” a neural network that outperformed technology from the startup that had taken years to create.

Rublee and Bradski collected a decade of rotoscoping and other visual effects work from various design houses, which they declined to identify. And they are adding their own work to the collection. After filming people, mannequins and other objects in front of a classic “green screen,” for example, company engineers can rotoscope thousands of images relatively quickly and add them to the data collection. Once the algorithm is trained, it can rotoscope images without help from a green screen.

The technology still has flaws, and in some cases human designers still make adjustments to the automated work. But it is improving.

“These methods are still rough around the edges – there is still a long tail of things that can go wrong in unpredictable ways – but there aren’t any fundamental roadblocks,” said Phillip Isola, a computer vision researcher at MIT and OpenAI, the artificial intelligence lab created by Tesla’s chief executive, Elon Musk, and others.

If tech companies can help automate some of the grunt work involved in creating special effects, creative people will have a chance to try new things, said Pasha Shapiro, a filmmaker and special effects artist who has also worked with Arraiy.

“Some work is so tedious that it is not practical,” he said. “That is where technology can help even more.”

CHRISTIE HEMM KLOK/THE NEW YORK TIMES: Stefan Avalos, a director, and Mark Rublee, a director of photography, test Arraiy’s technology while on a film set at Arraiy’s headquarters in Palo Alto, California, March 21.
