New Straits Times

Can artificial intelligence become conscious?

IF you watch sci-fi movies, robots with Artificial Intelligence (AI) usually exhibit some form of sentience. They have emotions and can think independently. It’s pretty much a given that in science fiction, AI can have consciousness. In the real world, this is still very much a debatable point. What’s obvious is that machines can become very smart — if by “smart” you mean performing complex calculations at blinding speed. There was a time when it was uncertain whether machines could actually beat humans in cerebral, strategy-based games like chess or Go. Today, there’s no question that they can. Computers can now even generate horror stories and compose music.

All this is done through massive and rapid calculations. There’s no thinking, creativity or imagination involved, and no intention or emotion either. To have such qualities, the machine would need to literally possess a mind of its own, or what scientists refer to as “Strong AI”. A machine with Strong AI would possess all the features of human cognition and become truly self-aware.

Such a system doesn’t exist, or at least not yet. What we have instead is what’s dubbed “Weak AI”. That doesn’t mean it’s not powerful. Weak AI can do massive calculations, solve complex problems and perform specific tasks with remarkable speed and precision. But it’s completely unaware and has no emotion. Google’s AlphaGo programme might be able to beat the best human Go player, but it doesn’t even know it’s playing Go and has no sense of joy when it wins.

None of today’s AI systems can experience the world qualitatively. They can only engage with it quantitatively. And although machines can be programmed to mimic or exhibit signs of consciousness, that’s just a simulation and not the real thing.

The Turing Test, developed in the 1950s, is a way to assess whether a machine can pass itself off as human by having a conversation with it. If the machine can fool a human, it has passed the test. Some programmes seemingly passed that test as early as 1966, but American philosopher John Searle has argued that the Turing Test doesn’t accurately measure a machine’s ability to “think” because although the machine can give the right response, it doesn’t necessarily know what it’s saying. It’s just designed to provide answers that can fool a human interviewer.

Searle devised a “Chinese Room” thought experiment to illustrate this point. In 1980, he published a paper arguing that if you have an AI system that takes in Chinese characters as input and produces appropriate Chinese character responses as output, it could fool someone into thinking it understands Chinese even though it actually has zero understanding. It’s merely simulating an ability to understand Chinese.
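Searle’s point is mechanical enough to be caricatured in a few lines of code. Below is a minimal, purely illustrative sketch in Python (the rule book, phrases and function name are invented for this example, not taken from Searle’s paper): it returns appropriate Chinese replies by blind table lookup, which is precisely why fluent output proves nothing about understanding.

```python
# A toy "Chinese Room": replies come from pure symbol lookup.
# The rule book below is an invented, illustrative stand-in for
# Searle's imagined book of instructions.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "Fine, thank you."
    "你会说中文吗？": "会，当然会。",    # "Do you speak Chinese?" -> "Yes, of course."
}

def chinese_room(message: str) -> str:
    """Return an appropriate reply with zero understanding of Chinese."""
    return RULE_BOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

# The reply looks fluent, yet the program grasps nothing it "says".
print(chinese_room("你好吗？"))
```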

As AI advances, there’s no question that machines can be programmed to accurately mimic consciousness. A robot could, for example, be programmed to display a sense of dismay when it detects that something bad has happened, or a sense of joy when something good has happened. But even if it’s fine-tuned to a very high level, so much so that it appears to be truly conscious, that doesn’t mean it actually is. The illusion of consciousness isn’t the same thing as actual consciousness.

There’s a movie called Ex Machina where a female AI robot convinces the protagonist that she’s in love with him, only to eventually use him to escape the laboratory where she was confined. Her deviousness could be seen as proof that she was conscious. But what if she was simply programmed to find different ways to escape, and tricking the human was just part of her programme? Then she’s clearly not conscious.

As mentioned earlier, simulating consciousness isn’t the same as duplicating it. Some AI systems today can simulate consciousness but none are even remotely close to duplicating it. But will that forever be the case?

Three top scientists recently made the case that machine consciousness is possible. Cognitive scientists Stanislas Dehaene, Hakwan Lau and Sid Kouider published an article in the October edition of Science (a prestigious journal) claiming that “empirical evidence is compatible with the possibility that consciousness arises from nothing more than specific computations.”

In other words, these scientists believe that consciousness fundamentally involves information processing, albeit a very complex form of it. If that is indeed the case, what is required to elevate artificial intelligence to artificial consciousness is to map the way the brain works and to then generate computer algorithms that replicate that process.
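To make “replicating the brain’s computations” concrete at the smallest possible scale, here is a deliberately crude Python sketch of a single leaky integrate-and-fire neuron, a standard simplified model in computational neuroscience (the parameter values are illustrative, not measured). Whether stacking billions of such computations could ever add up to consciousness is exactly the point in dispute.

```python
# A toy leaky integrate-and-fire neuron: it accumulates input,
# leaks charge over time, and emits a "spike" when its membrane
# potential crosses a threshold. Parameters are illustrative only.
def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the model neuron fires."""
    potential = 0.0
    spike_times = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # leak, then integrate input
        if potential >= threshold:
            spike_times.append(t)               # threshold crossed: spike
            potential = 0.0                     # reset after firing
    return spike_times

print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 0.9]))  # -> [2]
```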

Of course, this is easier said than done. The human brain is a complex biological organ. To map out its neural architecture and then emulate the way neurons interact is currently beyond what neuroscience or computer science is capable of. But even if that were possible someday, would replicating brain processes in a computer algorithm actually result in consciousness? That again is a heavily debated point.

Christof Koch, the chief scientific officer at the Allen Institute for Brain Science in Seattle, Washington, argues that replicating the processes digitally will not result in consciousness. “I think consciousness, like mass, is a fundamental property of the universe,” Koch says, adding: “The analogy, and it’s a very good one, is that you can make pretty good weather predictions these days. You can predict the inside of a storm. But it’s never wet inside the computer.”

Koch says that today’s computers, which are made of transistors, have a very different “cause-and-effect” structure than what we have in the brain, where one neuron is connected to 10,000 input neurons. But he believes if you were to build a very complex device — what he calls a “neuromorphic computer” — that replicates how the brain works, that device could potentially have a form of consciousness.

In other words, according to Koch, if one were to build a device that physically (not digitally) replicates the electrochemical processes of the brain, that might actually do the trick. That’s still a matter of conjecture, of course, because we really don’t know whether non-biological machines can support consciousness.

It’s tempting to treat the human brain as a kind of biological computer, but that analogy is misleading. If the brain were really just an organic information-processing system, then yes, all mental functions would be mere computations and we could therefore find ways to duplicate them.

This is the reason some people believe it will one day be possible to upload the contents of your brain to a computer. But this may be impossible no matter how powerful our computers become, because of this thing called consciousness, which may be a biological phenomenon that can’t be replicated in silicon-based systems. It could very well be that consciousness is, by its nature, restricted to carbon-based substrates.

The rapid and impressive developments in AI make it clear that even within our lifetime, the greatest intelligence on earth will be silicon-based. Computers will be able to solve a host of human problems in transportation, medicine, nutrition and so on. There’s no way human brains can out-calculate computers. But the one thing that keeps us superior is that we have consciousness. And that’s something computers will probably never have.

