The Asian Age

New algorithm for realistic computer animation


Los Angeles, April 11: Scientists have developed a new algorithm that can make computer animation more agile, acrobatic and realistic.

The researchers at the University of California, Berkeley in the US used deep reinforcement learning to recreate natural motions, even for acrobatic feats like break dancing and martial arts.

The simulated characters can also respond naturally to changes in the environment, such as recovering from tripping or being pelted by projectiles.

“This is actually a pretty big leap from what has been done with deep learning and animation,” said UC Berkeley graduate student Xue Bin Peng.

“In the past, a lot of work has gone into simulating natural motions, but these physics-based methods tend to be very specialised; they are not general methods that can handle a large variety of skills,” said Peng.

Each activity or task typically requires its own custom-designed controller.

“We developed more capable agents that behave in a natural manner,” he said.

“If you compare our results to motion-capture recorded from humans, we are getting to the point where it is pretty difficult to distinguish the two, to tell what is simulation and what is real. We’re moving toward a virtual stuntman,” said Peng.

The work could also inspire the development of more dynamic motor skills for robots.

Traditional techniques in animation typically require designing custom controllers by hand for every skill: one controller for walking, for example, and another for running, flips and other movements.

These hand-designed controllers can look pretty good, Peng said.

Alternatively, deep reinforcement learning methods, such as GAIL, can simulate a variety of different skills using a single general algorithm, but their results often look very unnatural.

“The advantage of our work is that we can get the best of both worlds,” Peng said.

“We have a single algorithm that can learn a variety of different skills, and produce motions that rival if not surpass the state of the art in animation with handcrafted controllers,” said Peng.

To achieve this, Peng obtained reference data from motion-capture (mocap) clips demonstrating more than 25 different acrobatic feats, such as backflips, cartwheels, kip-ups and vaults, as well as simple running, throwing and jumping.

After providing the mocap data to the computer, the team then allowed the system — dubbed DeepMimic — to “practise” each skill for about a month of simulated time, a bit longer than a human might take to learn the same skill.

The computer practised 24/7, going through millions of trials to learn how to realistically simulate each skill.

It learned through trial and error: comparing its performance after each trial to the mocap data, and tweaking its behaviour to more closely match the human motion.
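The idea of rewarding the simulated character for matching mocap data can be illustrated with a toy sketch. This is not DeepMimic's actual reward (which compares joint orientations, velocities and end-effector positions); it is a simplified, hypothetical imitation reward that peaks when the simulated pose matches the reference pose and decays as the error grows.

```python
import math

def imitation_reward(sim_pose, ref_pose, scale=2.0):
    """Toy imitation reward: 1.0 for a perfect match with the mocap
    reference pose, decaying exponentially with squared pose error."""
    error = sum((s - r) ** 2 for s, r in zip(sim_pose, ref_pose))
    return math.exp(-scale * error)

# A pose close to the mocap reference earns a higher reward than a
# distant one, which is the signal driving the trial-and-error tweaks.
ref = [0.0, 1.0, 0.5]
close = imitation_reward([0.05, 0.98, 0.52], ref)
far = imitation_reward([0.9, 0.1, -0.4], ref)
```

In a reinforcement-learning loop, the controller's parameters would be nudged toward actions that raise this reward, which is how "tweaking its behaviour to more closely match the human motion" works in practice.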
