Business Today

SEED OF DOUBT

Why the look-before-you-leap practice is critical for machines, too

Humans experience self-doubt more often than they like to admit. It is thought of as a negative trait and rarely encouraged because it suggests indecisiveness. Yet doubt serves an important function: it makes you question and review an action or decision you are about to take and, hopefully, reduces the chances of a blunder.

If human beings are to work in tandem with intelligent machines, doubt has to be built into them as well. Humans need to be assured not only that a machine is reviewing possibilities and consequences, but also that it can communicate its degree of uncertainty. "If a self-driving car doesn't know its level of uncertainty, it can make a fatal error that can be catastrophic," says Google researcher Dustin Tran in the MIT Technology Review. That is why Google and Uber's AI Lab are working on 'probabilistic programming'; Uber has even released a whole new language for it called Pyro. Instead of just responding to data with a yes or no, this type of programming builds in knowledge that includes different probabilities.
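To make that contrast concrete, here is a minimal plain-Python sketch (illustrative only; the threshold and sensor values are made up, and this is not Pyro's actual API) of the difference between answering yes or no and reporting a degree of uncertainty:

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    probability: float   # estimated chance the object ahead is a pedestrian
    confident: bool      # whether the system trusts its own estimate

def classify_deterministic(sensor_score: float) -> bool:
    # A purely yes/no system: it commits to an answer even on borderline input.
    return sensor_score > 0.5

def classify_probabilistic(sensor_score: float,
                           confidence_threshold: float = 0.8) -> Assessment:
    # A probabilistic system: it reports how sure it is, and admits doubt
    # when the probability sits in the uncertain middle ground.
    p = sensor_score  # in a real system this would come from a learned model
    confident = p >= confidence_threshold or p <= 1 - confidence_threshold
    return Assessment(probability=p, confident=confident)

# On a borderline reading, the deterministic version simply says "yes";
# the probabilistic version says "55 per cent, and I am not sure", a signal
# a self-driving car can use to slow down or hand control back to a human.
print(classify_deterministic(0.55))   # True
print(classify_probabilistic(0.55))   # Assessment(probability=0.55, confident=False)
```

A language such as Pyro generalises this idea: the model itself is written in terms of random variables, and inference returns distributions rather than single answers.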

Researchers working in this field believe that a machine that is aware of its own level of uncertainty, and able to communicate it, will make the machine or system safer. Machines will also be set up to make mistakes and learn from them before they are out in the wild and in use in everyday life. These issues were discussed at a recent conference on AI in California.

Another framework that combines deep learning and probabilistic scenarios is Edward, being developed at Columbia University. Probabilistic programming isn't new, either; Microsoft has been researching it for many years, based on the understanding that technology and AI predictions cannot afford to be deterministic and must have grey areas built in. A researcher at the company offers a simple demonstration: imagine an intelligent application trying to recommend movies for a person to watch from a catalogue of, say, 200 movies. If all it knows is which previously seen films the person liked, it will make recommendations based on similar ones. But as you feed it more information, such as which movies are disliked, which actors are liked, preferred genres, inputs from friends and so on, the bank of knowledge improves and recommendations can improve, even as the uncertainty the system must weigh increases.
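As a rough illustration of that trade-off, here is a plain-Python sketch (all data, source names and weights are hypothetical; this is not Microsoft's system or Edward's API): each extra source of information sharpens the recommendation score, but the system must also report how much those sources disagree.

```python
import statistics

def recommend_score(evidence):
    """evidence: list of (source, vote) pairs, with votes between -1 and +1."""
    votes = [vote for _, vote in evidence]
    score = statistics.mean(votes)
    # The disagreement between sources is the uncertainty the system reports,
    # instead of pretending a single number tells the whole story.
    spread = statistics.pstdev(votes) if len(votes) > 1 else 0.0
    return score, spread

# Only similarity to previously liked films: one signal, no visible conflict.
print(recommend_score([("similar_to_liked", +0.6)]))

# More knowledge: dislikes, favourite actors, friends' ratings. The score is
# better informed, but the sources disagree, so the reported uncertainty grows.
print(recommend_score([
    ("similar_to_liked", +0.6),
    ("disliked_sequel",  -0.4),
    ("favourite_actor",  +0.8),
    ("friend_rating",    +0.2),
]))
```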

With AI becoming a part of many facets of life, the complexity and subtlety of giving the system scenarios to consider increase exponentially, as it involves pattern recognition, probabilistic reasoning and computational learning. It's a nascent science, but an important one.

A MACHINE BEING AWARE OF ITS OWN LEVEL OF UNCERTAINTY AND BEING ABLE TO COMMUNICATE IT WILL MAKE THE SYSTEM SAFER
