New Straits Times

TEACHING ROBOTS TO IMITATE HUMANS

Researchers are creating artificial intelligence to enable machines to learn tasks on their own, writes CADE METZ


DURING a recent speech at the University of California, Berkeley, Pieter Abbeel played a video clip of a robot doing housework. In the clip recorded in 2008, the robot swept the floor, dusted the cabinets and unloaded the dishwasher. At the end of it all, it even opened a bottled drink and handed it to a guy on a couch.

The trick was that an engineer was operating the robot from afar, dictating its every move. But, as Abbeel explained, the video showed that robotic hardware was nimble enough to mimic complex human behaviour. It just needed software that could guide the hardware — without the help of that engineer.

“This is largely a computer science problem, an artificial intelligence problem,” Abbeel said. “We have the hardware that can do the job.”

Abbeel, a native of Belgium, has spent the last several years working on artificial intelligence, first as a Berkeley professor and then as a researcher at OpenAI, the lab founded by Tesla chief executive Elon Musk and other big Silicon Valley names. Now, he and three fellow researchers from Berkeley and OpenAI are starting their own company, intent on bringing a new level of robotic automation to the world’s factories, warehouses and, perhaps, even homes.

Their startup, Embodied Intelligence, is backed by US$7 million (RM29.6 million) in funding from Silicon Valley venture capital firm Amplify Partners and other investors. The company will specialise in complex algorithms that allow machines to learn tasks on their own.

Using these methods, robots could learn to, for example, install car parts that are not quite like the parts they have installed in the past, sort through a bucket of random holiday gifts as they arrive at a warehouse, or perform other tasks that machines traditionally could not.

“We now have teachable robots,” Abbeel said during a recent interview at the new company’s offices in Emeryville, just across the bay from San Francisco.

The new company is part of a much wider effort to create AI that allows robots to learn. Researchers at places like Google, Brown University and Carnegie Mellon are doing similar work, as are startups like Micropsi and Prowler.io.

Robots already automate some work inside factories and warehouses, such as moving boxes from place to place at Amazon’s massive distribution centres. But companies must programme these machines for each particular task, limiting their possible applications. The hope is that robots can master a much wider array of tasks by learning on their own.

“Today, every motion that an industrial robot makes is specified down to the millimetre,” said Amplify founder Sunil Dhaliwal, who led the firm’s investment in Embodied Intelligence.

“But most real problems can’t be solved that way. You have to be able not just to tell the robot what to do, but to tell it how to learn.”

Abbeel and the other founders of Embodied Intelligence, including the former OpenAI researchers Peter Chen and Rocky Duan and the former Microsoft researcher Tianhao Zhang, specialise in an algorithmic method called reinforcement learning, a way for machines to learn tasks by extreme trial and error.

Researchers at DeepMind, the London-based AI lab owned by Google, used this method to build a machine that could play the ancient game of Go better than any human. In essence, the machine learned to master this enormously complex game by playing against itself, over and over and over again.

Other researcher­s, across both industry and academia, have shown that similar algorithms allow robots to learn physical tasks as well. By repeatedly trying to open a door, for instance, a robot can learn which movements bring success and which do not.
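
As a rough illustration of that trial-and-error idea, the sketch below shows a toy tabular Q-learning agent discovering, purely by repeated attempts, which sequence of motions “opens a door”. Everything in it, from the three-stage environment to the reward values and hyperparameters, is invented for illustration and is far simpler than the deep reinforcement learning systems the researchers actually use.

```python
# Minimal trial-and-error (reinforcement learning) sketch. The "door" task,
# rewards and hyperparameters are invented for illustration only.
import random

# Toy problem: the robot must pick the right motion (action) in each of three
# stages (states) to open a door. A wrong motion ends the attempt with no reward.
N_STATES, N_ACTIONS = 3, 4
CORRECT = [2, 0, 3]          # the successful motion per stage, unknown to the agent

def step(state, action):
    """Return (next_state, reward, done) for a chosen motion."""
    if action != CORRECT[state]:
        return state, 0.0, True          # failed attempt
    if state == N_STATES - 1:
        return state, 1.0, True          # door opened
    return state + 1, 0.0, False         # progressed to the next stage

# Tabular Q-learning: estimate the value of each motion in each stage.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(5000):              # thousands of trials, mostly errors
    state, done = 0, False
    while not done:
        if random.random() < epsilon:    # explore: try a random motion
            action = random.randrange(N_ACTIONS)
        else:                            # exploit: use the best motion found so far
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        target = reward if done else reward + gamma * max(Q[next_state])
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state

print("Learned motion sequence:",
      [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)])
```

Run for a few thousand attempts, the agent settles on the correct sequence of motions; the same principle, scaled up with neural networks and physical or simulated robots, is what lets a real robot learn which movements open the door.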

Much like Google and labs at Brown and Northeastern University, Embodied Intelligence is also augmenting these methods with a wide range of other machine-learning techniques. Most notably, the startup is exploring what is called imitation learning, a way for machines to learn discrete tasks from human demonstrations.

The company is using this method to teach a two-armed robot to pick up plastic pipes from a table. Donning virtual reality headsets and holding motion trackers in their hands, Abbeel and his colleagues repeatedly demonstrate the task in a digital world that recreates what is in front of the robot. Then the machine can learn from this digital data.
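
In machine-learning terms, this kind of imitation learning is often implemented as behavioural cloning: the recorded observation-action pairs from the human demonstrations become a supervised learning dataset, and a policy is fitted to reproduce the demonstrator’s actions. The sketch below illustrates the idea with entirely synthetic data and a simple linear policy standing in for the neural networks and VR-recorded demonstrations used in practice.

```python
# Minimal behavioural-cloning sketch. The "demonstration" data, feature sizes
# and linear policy are invented for illustration; real systems record
# observations and controller motions from VR teleoperation and train networks.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each demonstration step pairs an observation (e.g. the pipe's position
# on the table, 4 features) with the action the human chose (a 2-D arm motion).
n_steps, obs_dim, act_dim = 500, 4, 2
human_behaviour = rng.normal(size=(obs_dim, act_dim))   # stands in for the demonstrator
observations = rng.normal(size=(n_steps, obs_dim))
actions = observations @ human_behaviour + 0.01 * rng.normal(size=(n_steps, act_dim))

# Behavioural cloning reduces to supervised learning: find policy weights that
# best reproduce the demonstrated actions (here, ordinary least squares).
policy_weights, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# The learned policy can now propose an action for an observation it has not seen.
new_observation = rng.normal(size=(1, obs_dim))
print("Imitated action:", new_observation @ policy_weights)
```

The point of the sketch is only that, once the human’s demonstrations are captured as data, teaching the robot becomes a standard fitting problem rather than hand-programming each motion.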

“We collect data on what the human is doing,” Chen said. “Then we can train the machine to imitate the human.” NYT


AFP PIC: Robots already automate some work inside factories and warehouses, but companies must programme these machines for each task, limiting their possible applications.
