The Atlanta Journal-Constitution

Google progresses in teaching robots to be like humans

Step-by-step requests to complete tasks are becoming less necessary.

By Gerrit De Vynck and Rachel Lerman

MOUNTAIN VIEW, CALIF. — Researchers at Google’s lab recently asked a robot to build a burger out of various plastic toy ingredients.

The mechanical arm knew enough to add ketchup after the meat and before the lettuce but thought the right way to do so was to put the entire bottle inside the burger.

While that robot won’t be working as a line cook soon, it is representative of a bigger breakthrough announced by Google engineers Tuesday. Using recently developed artificial-intelligence software known as “large language models,” the researchers say they’ve been able to design robots that can help humans with a broader range of everyday tasks.

Instead of requiring a laundry list of instructions that directs each of their movements one by one, the robots can now respond to complete requests, more like a human.

In one demonstration this month, a researcher said to a robot: “I’m hungry. Can you get me a snack?” The robot proceeded to search through a cafeteria, open a drawer, find a bag of chips and bring it to the researcher.

It’s the first time language models have been integrated into robots, Google executives and researchers say.

“This is very fundamentally a different paradigm,” said Brian Ichter, a research scientist at Google and one of the authors of a paper released Tuesday describing the progress the company has made.

Robots are already commonplace. Millions of them work in factories around the world, but they follow specific instructions and usually focus only on one or two tasks, such as moving a product down the assembly line or welding two pieces of metal together. The race to build a robot that can do a range of everyday tasks — and learn on the job — is much more complex. Technology companies big and small have labored to build such general-purpose robots for years.

Language models work by taking huge amounts of text uploaded to the internet and using it to train AI software to guess what kinds of responses might come after certain questions or comments. The models have become so good at predicting the right response that engaging with one often feels like having a conversation with a knowledgeable human. Google and other companies, including OpenAI and Microsoft, have poured resources into building better models and training them on ever-bigger sets of text, in multiple languages.
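At a toy scale, that prediction idea can be sketched in a few lines of Python: count which word tends to follow which in some training text, then use the counts to guess what comes next. This is only a loose illustration of the principle the article describes, not how Google's models work; the tiny training text and the function names below are invented for the example, and real language models are vastly larger neural networks.

```python
from collections import Counter, defaultdict

# Tiny stand-in for the "huge amounts of text uploaded to the internet"
# the article describes; real models train on vastly more.
training_text = (
    "the robot picked up the chips and the robot brought the chips "
    "to the researcher and the robot opened the drawer"
).split()

# Count how often each word follows each other word (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Guess the most likely next word, a toy version of the prediction
    that large language models perform over whole sentences and documents."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))    # "robot", the word that follows "the" most often here
print(predict_next("robot"))  # "picked" (ties broken by first appearance in the text)
```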

The work is controversial. In July, Google fired one of its employees who had claimed he believed the software was sentient. The consensus among AI experts is that the models are not sentient, but many are concerned that they exhibit biases because they’ve been trained with huge amounts of unfiltered, human-generated text.

Some language models have shown themselves to be racist or sexist, or easily manipulated into spouting hate speech or lies when prompted with the right statements or questions.

In general, language models could give robots knowledge of high-level planning steps, said Carnegie Mellon assistant professor Deepak Pathak, who studies AI and robotics and was commenting on the field, not specifically Google. But those models won’t give robots all the information they need — for example, how much force to apply when opening a refrigerator. That knowledge has to come from somewhere else.

“It solves only the high-level planning issue,” he said.

Still, Google is forging ahead and has now melded the language models with some of its robots. Instead of having to encode specific technical instructions for each task a robot can do, researchers can simply talk to the robots in everyday language. More important, the new software helps the robots parse complex multistep instructions on their own, so they can interpret requests they’ve never heard before and come up with responses and actions that make sense.
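The division of labor Pathak describes can be sketched roughly: a language model proposes which of a robot's pre-trained low-level skills to run next, while the skills themselves handle the physical details. The sketch below is a simplified illustration of that general idea, not Google's actual system; the skill names and the hard-coded plan are hypothetical stand-ins for what a language model would generate.

```python
# Simplified illustration only: a free-form request is broken into a
# sequence of named low-level skills. In a real system a large language
# model would propose the skill sequence; here one plan is hard-coded and
# the skill names (find, pick_up, bring_to) are hypothetical.

SKILLS = {
    "find": lambda target: print(f"searching the cafeteria for {target}"),
    "pick_up": lambda target: print(f"grasping {target}"),
    "bring_to": lambda target: print(f"carrying the item to {target}"),
}

def plan_from_request(request: str) -> list[tuple[str, str]]:
    """Stand-in for the language model's high-level planning step."""
    if "snack" in request.lower():
        return [
            ("find", "a bag of chips"),
            ("pick_up", "the bag of chips"),
            ("bring_to", "the person who asked"),
        ]
    return []

def run(request: str) -> None:
    # Each skill is a separately trained low-level controller; the planner
    # only decides which one to call next and with what target.
    for skill, target in plan_from_request(request):
        SKILLS[skill](target)

run("I'm hungry. Can you get me a snack?")
```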

Robots that can use language models could change how manufacturing and distribution facilities are run, said Zac Stewart Rogers, a supply chain management assistant professor at Colorado State University.

“A human and a robot working together is always the most productive” now, he said. “Robots can do manual heavy lifting. Humans can do the nuanced troubleshooting.”

If robots were able to figure out complex tasks, it could mean that distribution centers could be smaller, with fewer humans and more robots. That could mean fewer jobs for people, though Rogers points out that generally when there is a contraction due to automation in one area, jobs are created in other areas.

It’s also probably still a long way away. AI techniques such as neural networks and reinforcement learning have been used to train robots for years. It’s led to some breakthroughs, but progress is still slow. Google’s robots are nowhere near ready for the real world, and in interviews, Google’s researchers and executives said repeatedly they are simply running a research lab and do not have plans to commercialize the technology yet.

But it’s clear Google and other Big Tech companies have a serious interest in robotics. Amazon uses many robots in its warehouses, is experimenting with drone delivery and this month agreed to buy the maker of the Roomba vacuum cleaner robot for $1.7 billion. (Amazon founder Jeff Bezos owns The Washington Post.)

Tesla, which has developed some autonomous driving features for its cars, is also working on general-purpose robots.

In 2013, Google went on a spending spree, buying several robotics companies, including Boston Dynamics, the maker of the robot dogs that often go viral on social media. But the executive in charge of the program was accused of sexual misconduct and left the company soon after. In 2017, Google sold Boston Dynamics to Japanese telecommunications and tech investment giant SoftBank. The hype around ever-smarter robots designed by the most powerful tech companies faded.

In the language model project, Google researchers worked alongside those from Everyday Robots, a separate but wholly owned company inside Google that works specifically on building robots that can do a range of “repetitive” and “drudgerous” tasks. The robots are already at work in various Google cafeterias, wiping down counters and throwing out trash.

PHOTOS BY MONICA RODMAN/WASHINGTON POST: Among the many robots Google is teaching with newly developed software called large language models, this one is learning how to catch a ball with a lacrosse attachment at the company’s headquarters in Mountain View, California.

A robot picks up a can of soda during a demonstration at Google headquarters. Recently introduced large language models for robot “learning” involve significant chunks of text uploaded to the internet used to teach AI software to predict the best response to questions.
