SEED OF DOUBT
Why the look-before-you-leap practice is critical for machines, too
Humans experience self-doubt more often than they like to admit. It is usually seen as a negative and rarely encouraged, because it suggests indecisiveness. Yet doubt serves an important function: it makes you question and review an action or decision before you take it and, hopefully, reduces the chances of a blunder.
If human beings are to work in tandem with intelligent machines, doubt has to be built into them as well. Humans need assurance not only that a machine is reviewing possibilities and consequences, but also that it can communicate its degree of uncertainty. “If a self-driving car doesn’t know its level of uncertainty, it can make a fatal error that can be catastrophic,” Google researcher Dustin Tran told the MIT Technology Review. That’s why Google and Uber’s AI lab are working on ‘probabilistic programming’, with a new language called Pyro. Instead of responding to data with a plain yes or no, this style of programming builds in knowledge that carries different probabilities.
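Pyro itself layers probabilistic models over PyTorch; but the underlying idea, reporting a calibrated probability and deferring when uncertainty is high, can be sketched in plain Python. The sensor rates and decision thresholds below are illustrative assumptions, not Pyro code or real self-driving parameters:

```python
# Minimal sketch of probabilistic reasoning (plain Python, not Pyro):
# instead of a yes/no answer, the model reports a probability, so the
# caller can decide whether to act or to gather more evidence first.

def posterior_obstacle(prior: float, sensor_hit: bool,
                       true_pos: float = 0.9, false_pos: float = 0.2) -> float:
    """Bayes' rule: P(obstacle | sensor reading) from a prior belief
    and assumed sensor true/false-positive rates."""
    if sensor_hit:
        num = true_pos * prior
        den = true_pos * prior + false_pos * (1 - prior)
    else:
        num = (1 - true_pos) * prior
        den = (1 - true_pos) * prior + (1 - false_pos) * (1 - prior)
    return num / den

def decide(p: float, act_above: float = 0.95, ignore_below: float = 0.05) -> str:
    """Only commit to an action when the uncertainty is low enough."""
    if p >= act_above:
        return "brake"
    if p <= ignore_below:
        return "proceed"
    return "uncertain: gather more data"

p = 0.1                                       # prior belief: obstacle present
p = posterior_obstacle(p, sensor_hit=True)    # one positive sensor reading
print(round(p, 3), decide(p))                 # -> 0.333 uncertain: gather more data
```

A single noisy reading moves the belief from 0.1 to about 0.33, which is not enough to act on either way; the system knows that it does not know, which is exactly the property Tran describes.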
Researchers in this field believe that a machine that is aware of its own level of uncertainty, and able to communicate it, will be a safer machine or system. Machines can also be set up to make mistakes and learn from them before they are out in the wild and in everyday use. These issues were discussed at a recent AI conference in California.
Another framework that combines deep learning with probabilistic scenarios is Edward, being developed at Columbia University. Probabilistic programming isn’t actually new; Microsoft has been researching it for many years, based on the understanding that AI predictions cannot afford to be deterministic and must have grey areas built in. To take a simple demonstration from a researcher at the company: an intelligent application trying to recommend movies might display, say, 200 of them. It knows which previously seen films the person liked. If that is all it knows, it will recommend similar ones. But as you feed it more information, such as which movies were disliked, which actors are liked, preferred genres and inputs from friends, the bank of knowledge grows and recommendations can improve, though the uncertainty the system must reason about also grows.
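The movie example can be made concrete with a toy model. A probabilistic recommender keeps a distribution over “this person likes this genre” rather than a yes/no flag; a Beta distribution over like/dislike counts is one standard choice. The function and numbers below are illustrative, not a real recommender API:

```python
# Hedged sketch: model a preference as a Beta distribution rather
# than a binary flag. The mean is the estimated probability of a
# "like"; the variance expresses how uncertain that estimate is.

def beta_estimate(likes: int, dislikes: int):
    """Mean and variance of Beta(likes+1, dislikes+1),
    i.e. P(user likes this genre) with a uniform prior."""
    a, b = likes + 1, dislikes + 1
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# With little feedback the estimate is vague (high variance)...
m1, v1 = beta_estimate(likes=2, dislikes=1)
# ...with more feedback the same preference is known more confidently.
m2, v2 = beta_estimate(likes=20, dislikes=10)
print(round(m1, 2), round(m2, 2), v2 < v1)   # -> 0.6 0.66 True
```

Each individual signal sharpens as feedback accumulates; the growing uncertainty the article describes comes from combining many such signals (genres, actors, friends) into one model, which multiplies the possibilities the system must weigh.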
With AI becoming part of many facets of life, the complexity and subtlety of the scenarios a system must consider grow rapidly, since they involve pattern recognition, probabilistic reasoning and computational learning. It’s a nascent science, but an important one.
A MACHINE BEING AWARE OF ITS OWN LEVEL OF UNCERTAINTY AND BEING ABLE TO COMMUNICATE IT WILL MAKE THE SYSTEM SAFER