Waterloo Region Record

They can grab, twist and make beds

The world’s leading artificial intelligence labs are putting their touch on robotic hands

- MAE RYAN, CADE METZ AND RUMSEY TAYLOR

A robotic hand? Four autonomous fingers and a thumb that can do anything your own flesh and blood can do? That is still the stuff of fantasy.

But inside the world’s top artificial intelligence labs, researchers are getting closer to creating robotic hands that can mimic the real thing.

The spinner: Inside OpenAI, the San Francisco artificial intelligence lab founded by Elon Musk and several other big Silicon Valley names, you will find a robotic hand called Dactyl. It looks a lot like Luke Skywalker’s mechanical prosthetic in the latest Star Wars film: mechanical digits that bend and straighten like a human hand.

If you give Dactyl an alphabet block and ask it to show you particular letters — let’s say the red O, the orange P and the blue I — it will show them to you and spin, twist and flip the toy in nimble ways.

For a human hand, that is a simple task. But for an autonomous machine, it is a notable achievement: Dactyl learned the task largely on its own. Using the mathematical methods that allow Dactyl to learn, researchers believe they can train robotic hands and other machines to perform far more complex tasks.

The gripper: Created by researchers at the Autolab, a robotics lab inside the University of California at Berkeley, this system represents where the technology stood just a few years ago.

Equipped with a two-fingered “gripper,” the machine can pick up items like a screwdriver or a pair of pliers and sort them into bins.

The gripper is much easier to control than a five-fingered hand, and building the software needed to operate a gripper is not nearly as difficult.

It can deal with objects that are slightly unfamiliar. It may not know what a restaurant-style ketchup bottle is, but the bottle has the same basic shape as a screwdriver, something the machine does know.

But when this machine is confronted with something that is different from what it has seen before — like a plastic bracelet — all bets are off.

The picker: What you really want is a robot that can pick up anything, even stuff it has never seen before. That is what other Autolab researchers have built over the past few years.

This system still uses simple hardware: a gripper and a suction cup. But it can pick up all sorts of random items, from a pair of scissors to a plastic toy dinosaur.

The system benefits from dramatic advances in machine learning. The Berkeley researchers modelled the physics of more than 10,000 objects, identifying the best way to pick up each one. Then, using an algorithm called a neural network, the system analyzed all this data, learning to recognize the best way to pick up any item. In the past, researchers had to program a robot to perform each task. Now it can learn these tasks on its own.
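The basic recipe (simulate many grasps, label which ones hold, and train a network to score new ones) can be sketched in a few lines of Python. The sketch below is illustrative only: the features, labels and network size are made-up stand-ins, not the Berkeley system.

```python
# Toy sketch, not the Berkeley researchers' code: learn to score candidate
# grasps from simulated examples, then pick the highest-scoring one.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Pretend each simulated grasp attempt is summarized by a small feature
# vector (say, gripper width, approach angle, local surface curvature)
# plus a label recording whether the grasp held in the physics simulation.
n_attempts = 5000
features = rng.uniform(-1.0, 1.0, size=(n_attempts, 3))
# Hypothetical rule standing in for simulated physics: narrow grips on
# flattish surfaces tend to succeed.
success = ((features[:, 0] < 0.2) & (np.abs(features[:, 2]) < 0.5)).astype(int)

# A small neural network learns which grasps are likely to hold.
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
model.fit(features, success)

def best_grasp(candidates: np.ndarray) -> int:
    """Return the index of the candidate grasp rated most likely to succeed."""
    scores = model.predict_proba(candidates)[:, 1]
    return int(np.argmax(scores))

# For a new object, sample a few candidate grasps and keep the best-scoring one.
candidates = rng.uniform(-1.0, 1.0, size=(10, 3))
print("chosen grasp:", best_grasp(candidates))
```

The key design point is that nobody writes a rule for each object: the network generalizes from thousands of simulated examples, which is why the real system can choose between its gripper and its suction cup even for items it has never seen.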

When confronted with, say, a plastic Yoda toy, the system recognizes it should use the gripper to pick the toy up. But when it faces the ketchup bottle, it opts for the suction cup.

The bed maker: This robot may not make perfect hospital corners, but it represents notable progress. Berkeley researchers pulled the system together in just two weeks, using the latest machine learning techniques. Not long ago, this would have taken months or years.

Now the system can learn to make a bed in a fraction of that time, just by analyzing data. In this case, the system analyzes the movements that lead to a made bed.

The pusher: Across the Berkeley campus, at a lab called BAIR, another system is applying other learning methods. It can push an object with a gripper and predict where it will go. That means it can move toys across a desk much as you or I would.

The system learns this behaviour by analyzing vast collections of video images showing how objects get pushed. In this way, it can deal with the uncertainties and unexpected movements that come with this kind of task.
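A heavily simplified sketch of the idea, predicting where an object will end up from examples of past pushes, might look like the Python below. The real BAIR work predicts future camera images, which is far more involved; here the data, the stand-in dynamics and the function names are invented for illustration.

```python
# Simplified sketch of learning push outcomes from recorded examples
# (made-up data, not the BAIR system, which works on raw video).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

n_pushes = 2000
start_xy = rng.uniform(0.0, 1.0, size=(n_pushes, 2))    # object position on the desk
push_vec = rng.uniform(-0.1, 0.1, size=(n_pushes, 2))   # direction and length of the push
# Stand-in for the messy real world: the object slides most of the push
# distance, plus a little unpredictable slip.
end_xy = start_xy + 0.8 * push_vec + rng.normal(0, 0.005, size=(n_pushes, 2))

# Learn to predict the final position from the start position and the push.
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
model.fit(np.hstack([start_xy, push_vec]), end_xy)

def pick_push(start, goal, n_candidates=50):
    """Try several candidate pushes and keep the one predicted to land closest to the goal."""
    pushes = rng.uniform(-0.1, 0.1, size=(n_candidates, 2))
    inputs = np.hstack([np.tile(start, (n_candidates, 1)), pushes])
    predicted = model.predict(inputs)
    errors = np.linalg.norm(predicted - goal, axis=1)
    return pushes[np.argmin(errors)]

print("chosen push:", pick_push(np.array([0.2, 0.2]), np.array([0.25, 0.3])))
```

Because the model is learned from examples rather than hand-written physics, it absorbs some of the slip and wobble that make pushing real objects unpredictable.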

The future: These are all simple tasks. And the machines can only handle them in certain conditions. They fail as much as they impress. But the machine learning methods that drive these systems point to continued progress in the years to come.

Like those at OpenAI, researchers at the University of Washington are training robotic hands that have all the same digits and joints that our hands do. That is far more difficult than training a gripper or a suction cup. An anthropomorphic hand moves in so many different ways.

So, the Washington researcher­s train their hand in simulation — a digital recreation of the real world. That streamline­s the training process.

At OpenAI, researcher­s are training their Dactyl hand in much the same way. The system can learn to spin the alphabet block through what would have been 100 years of trial and error. Once it learns what works in the simulation, it can apply this knowledge to the real world.

ERIC LOUIS HAINES/OPENAI VIA THE ASSOCIATED PRESS: A robotic hand, called Dactyl, at OpenAI’s research lab in San Francisco. Its job is to rotate a cube until the top letter matches a random selection.
