The Borneo Post (Sabah)

Training computers to recognise actions


CAMBRIDGE, Massachusetts: A person watching videos that show things opening — a door, a book, curtains, a blooming flower, a yawning dog — easily understands that the same type of action is depicted in each clip.

“Computer models fail miserably to identify these things. How do humans do it so effortlessly?” asks Dan Gutfreund, a principal investigator at the MIT-IBM Watson AI (Artificial Intelligence) Laboratory and a staff member at IBM Research. “We process information as it happens in space and time. How can we teach computer models to do that?”

Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. Launched last year, the lab brings MIT and IBM researchers together to work on AI algorithms, the application of AI to industries, the physics of AI, and ways to use AI to advance shared prosperity.

The Moments in Time dataset is one of the projects related to AI algorithms funded by the lab. It pairs Gutfreund with Aude Oliva, a principal research scientist at the MIT Computer Science and Artificial Intelligence Laboratory, as the project’s principal investigators.

Moments in Time is built on a collection of one million annotated videos of dynamic events unfolding within three seconds. Gutfreund and Oliva, who is also the MIT executive director at the MIT-IBM Watson AI Lab, are using these clips to address one of the next big steps for AI: teaching machines to recognise actions.

The goal is to provide deep-learning algorithms with large coverage of an ecosystem of visual and auditory moments that may enable models to learn information that isn’t necessarily taught in a supervised manner, and to generalise to novel situations and tasks, the researchers say.
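To make the supervised setup the researchers describe more concrete, the toy sketch below (entirely hypothetical: the names, numbers, and features are invented for illustration, and the real Moments in Time clips are three-second videos with audio, not lists of numbers) treats each labelled "clip" as a short sequence of per-frame values and recognises an action by comparing a clip's temporal summary to per-action prototypes learned from labelled examples:

```python
from statistics import mean

# Hypothetical toy data: each "clip" is a short sequence of per-frame
# feature values, labelled with the action it depicts.
TRAINING_CLIPS = {
    "opening": [[0.1, 0.4, 0.9], [0.0, 0.5, 1.0]],  # values rise over time
    "closing": [[0.9, 0.5, 0.1], [1.0, 0.4, 0.0]],  # values fall over time
}

def clip_feature(clip):
    """Collapse a clip's per-frame values into one temporal summary.

    We keep the mean level and the overall change (last minus first
    frame), a crude stand-in for the spatiotemporal features a deep
    network would learn from pixels and sound.
    """
    return (mean(clip), clip[-1] - clip[0])

def train(labelled_clips):
    """Average each action's clip features into a per-action prototype."""
    prototypes = {}
    for action, clips in labelled_clips.items():
        feats = [clip_feature(c) for c in clips]
        prototypes[action] = tuple(mean(f[i] for f in feats) for i in range(2))
    return prototypes

def recognise(clip, prototypes):
    """Label a new clip with the nearest prototype (squared distance)."""
    f = clip_feature(clip)
    return min(prototypes,
               key=lambda a: sum((f[i] - prototypes[a][i]) ** 2
                                 for i in range(2)))

prototypes = train(TRAINING_CLIPS)
print(recognise([0.2, 0.6, 0.8], prototypes))  # a rising clip: "opening"
```

The point of the sketch is the generalisation step the researchers mention: the query clip appears nowhere in the training data, yet its temporal signature lands nearest the right action prototype.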

“As we grow up, we look around, we see people and objects moving, we hear the sounds that people and objects make. We have a lot of visual and auditory experiences. An AI system needs to learn the same way, and be fed with videos and dynamic information,” Oliva said.

One key goal at the lab is the development of AI systems that move beyond specialised tasks to tackle more complex problems and benefit from robust and continuous learning. “We are seeking new algorithms that not only leverage big data when available, but also learn from limited data to augment human intelligence,” said Sophie V. Vandebroek, chief operating officer of IBM Research, about the collaboration. — MIT News

Aude Oliva (right) and Dan Gutfreund are the principal investigators for the Moments in Time dataset. — Photo by John Mottern/Feature Photo Service for IBM
