The Malta Business Weekly

Will computers ever be smarter than us?

Most of us have seen or heard of the Terminator series of films, where the story revolves around computers from the future becoming “self-aware” and deciding, in an instant, to overthrow the human race and destroy it.

- Vince Farrugia

What is AI?

AI is short for “artificial intelligence” and refers to intelligence attributable specifically to machines. We can assume that if a machine can perceive the environment it is in and take certain actions to maximise an end goal that it has been given, then it exhibits some sort of “intelligence”. Although we say the same thing of a dog, a horse or a monkey, that is a different type of intelligence.

Intelligence vs computation

In 1997, IBM built a machine called “Deep Blue” for the sole purpose of beating then-World Chess Champion Garry Kasparov in a chess match under normal tournament conditions. An earlier version had already been built in 1996, and Kasparov had managed to win that year. However, the much-improved 1997 version beat the World Champion, the first time ever that a machine had defeated a reigning World Champion in chess under normal tournament conditions. Kasparov himself said that, in game 1, he felt a particular move from the computer showed a “higher intelligence”, and it was probably pivotal in rattling him to the point where he lost the six-game match.

So, can we assume that, in 1997, machines became more intelligent than us, at least in chess? The answer is yes, in a way. Computers are very good at computational and mathematical tasks, so it stands to reason that they should always win where millions of calculations and analyses need to be done per second. One could argue that a simple calculator is smarter than us because it can take the square root of any number almost instantly, whereas we cannot. By that logic, a washing machine is also smarter because it can wash clothes more effectively than we can. So which is it? Are they really smarter or not?

Machines and machine learning

The truth is that a computer can only do the things we program it to do, just like a coffee machine or a washing machine. Such machines cannot one fine day ‘think’ and say, “From now on I’m tired of making coffee. I’ll make bread instead.” A computer, no matter how large, complex or powerful, can only do what it is programmed to do.

So, in this case, what is AI? Since computers cannot possibly think for themselves, AI is impossible, right? Machine learning is a concept whereby a computer or robot can learn specific tasks by itself. For example, there are robots that were devised to learn how to stand up when they fall, pretty much what a one-year-old toddler does. The programmers did not put an algorithm in the robots to teach them how to stand up; they left the robots to create the algorithm through trial and error. In this scenario, the robot has achieved intelligence by standing up on its own, but only because the programmers designed it to learn this.
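The trial-and-error idea can be illustrated with a toy sketch. The action names and success rates below are invented for the demonstration; the point is only that the program is never told which action works, and discovers it by trying and remembering:

```python
import random

def learn_to_stand(trials=1000, seed=0):
    """Toy trial-and-error learner: pick actions at random, keep a running
    average of how often each one succeeds, then prefer the best one."""
    rng = random.Random(seed)
    actions = ["push_left_leg", "push_right_leg", "push_both_legs"]
    # Hidden success rates the learner cannot see (assumed for this demo).
    success_rate = {"push_left_leg": 0.2,
                    "push_right_leg": 0.3,
                    "push_both_legs": 0.9}
    score = {a: 0.0 for a in actions}   # running average reward per action
    count = {a: 0 for a in actions}     # how many times each was tried
    for _ in range(trials):
        a = rng.choice(actions)         # try something at random
        reward = 1.0 if rng.random() < success_rate[a] else 0.0
        count[a] += 1
        score[a] += (reward - score[a]) / count[a]  # update running average
    # The learned "algorithm" is simply: do what worked most often.
    return max(actions, key=lambda a: score[a])
```

After enough trials, the learner settles on the action with the highest hidden success rate, exactly as the paragraph describes: no one wrote a standing-up rule into the program; the rule emerged from repeated attempts.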

The Turing Test and the Loebner Prize

In 1950, Alan Turing devised a test of a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human. The concept was for a human judge to hold natural-language conversations with both humans and a machine designed to generate human-like responses. If, after a five-minute conversation, the machine fooled enough judges into thinking they had been chatting with another human (Turing predicted machines would eventually fool around 30% of judges), the computer passed the test.

Nowadays, the Loebner Prize awards prizes for passing the Turing test… though both the gold (audio and visual) and silver (text only) awards have yet to be ‘won’. In the meantime, the bronze award goes to the computer that, in the judges’ opinion, demonstrates the “most human” conversational behaviour. In 2014, a chatbot was claimed to have passed the Turing test, though its creators framed it as a 13-year-old Ukrainian boy and, while it might have fooled 10 out of 30 judges, the evidence was not strong enough to stand as a genuine pass.

The limitations of the Turing test are many. The most obvious one is that many humans themselves fail it! Also, we still know very little about how our brain works during conversation, so it is very difficult to program a computer to mimic it.

Future cataclysmic events by intelligent computers?

The doomsday events that movies like the Terminator or Matrix series portray are highly unlikely. The reason is that a computer program can never “think outside the box”. It has to be programmed by us to do specific jobs and, even if we give it the ability to do some machine learning, this is limited specifically to the task it is given.

So, unless a mad scientist one day decides to hand a computer an algorithm for learning to take over the world, we can rest safe in the knowledge that no computer entity will ever attempt to trigger a nuclear holocaust.

Vince Farrugia is a Technology and Security Manager at Deloitte Malta. For more information, please visit http://www2.deloitte.com/mt

