MIT’s ComText lets robots follow voice commands
ComText uses natural language to learn about an object’s size, shape, position, type and even whether it belongs to somebody. From this knowledge base, it can then reason, infer meaning and respond to commands.
“The main contribution is this idea that robots should have different kinds of memory, just like people,” said Andrei Barbu, the project’s co-lead.
With ComText, the Baxter robot executed the right command about 90 per cent of the time.
In the future, the team hopes to enable robots to understand more complicated information, such as multistep commands and the intent of actions, and to use objects’ properties to interact with them more naturally.
By allowing much less constrained interactions, this line of research could enable better communication with a range of robotic systems, from self-driving cars to household helpers.
“This work is a nice step towards building robots that can interact much more naturally with people,” said Luke Zettlemoyer, an associate professor at the University of Washington in the US, who was not involved in the research.
“In particular, it will help robots better understand the names that are used to identify objects in the world, and interpret instructions that use those names to better do what users ask,” Zettlemoyer said.