The Malta Independent on Sunday

The Skynet Paradox

“The unknown future rolls toward us. I face it, for the first time, with a sense of hope because if a machine, a Terminator, can learn the value of human life, maybe we can too.” (Sarah Connor in Terminator 2: Judgment Day)

- Ian Gauci

Skynet, the revolutionary artificial intelligence system built by Cyberdyne Systems, is a central theme of the Terminator films. Incidentally, when we first think of artificial intelligence, our first port of call is science fiction and sentient beings or robots. Even the famous Laws of Robotics come from ‘Runaround’, one of Asimov’s short stories in the collection ‘I, Robot’.

But what is artificial intelligence (AI) exactly? Its origins can be traced back centuries, from Thomas Bayes and George Boole to Charles Babbage, who designed the first programmable mechanical computer. In his classic essay ‘Computing Machinery and Intelligence’, Alan Turing imagined the possibility of computers built to simulate intelligence. In 1956, John McCarthy coined the term, defining AI as ‘the science and engineering of making intelligent machines’.

Despite this definition, we are still somewhat at a loss to produce a clear and harmonised taxonomy of the two simple words umbilically linked in AI.

The first limb, ‘artificial’, can mean something not occurring in nature, or not occurring in the same form in nature. If we pause a little here and think of the current and envisaged advancements of 3D printing to construct human organs, technologies like CRISPR and the advancement of science generally, this definition is a conundrum. The artificial is no longer linked to a programming output or a synthetic material and, without delving into the moral and ethical dilemmas, this is slowly but gradually blurring the legacy concept of what is natural and what occurs in the same form in nature.

This leads us to the second limb: intelligence. From a philosophical perspective, ‘intelligence’ is a vast minefield, especially if treated as including one or more of ‘consciousness’, ‘thought’, ‘free will’ and ‘mind’. The philosopher Immanuel Kant argued that the mind brings to experience certain ordering qualities of its own: the 12 a priori (deductive) categories of causality, unity, totality and the like, and the a priori intuitions of time and space. He also considered psychology to be an empirical inquiry into the laws of mental operations, where human intelligence is visualised as a compendium of abilities.

Nick Bostrom, Professor of Philosophy at Oxford University and director of the Future of Humanity Institute, while categorising intelligence in AI as general and super intelligence, also asks how we should act, mindful that we will eventually live alongside artificial minds exponentially more powerful than our own. The late Stephen Hawking also acknowledged this and was very vocal about the effect such intelligence could have on humankind.

The futurist John Smart thinks that, given their processing capacity, AIs would be “vastly more responsible, regulated, and self-restrained than human beings” and that “if morality and immunity are developmental processes, if they arise inevitably in all intelligent collectives as a type of positive-sum game, they must also grow in force and extent as each civilisation’s computational capacity grows”.

To my mind, the correlation between the definitions of ‘artificial’ and ‘intelligence’, and the perceived outcomes, leaves more questions than answers. What exactly would fall within this definition? Would it imply that AI can be self-sufficient and autonomous from the will of its creator? Can it thus act independently and be attributed rights, obligations and liabilities over such autonomous actions? Should it therefore also have moral status and/or legal personality?

In February of last year, the EU Parliament, with an unprecedented show of support, took an initial step towards enacting the world’s first robot laws, suggesting among other things that sophisticated autonomous robots be given a specific legal status as electronic persons. Saudi Arabia has granted citizenship to a robot, “Sophia”. Estonia has been mulling over granting AI and robots a legal status somewhere between ‘separate legal personality’ and ‘personal property’, called ‘robot-agent’. This could potentially allow AI to own its creations, as well as be liable for any damage, and introduce the concept of lex machina criminalis. However, it also raises social, ethical and legal concerns that need more profound analysis.

We might need some more time to understand the implications of such measures, and perhaps to plan a more transparent, step-by-step transition to the reality that AI will bring about. The AI community, mindful of the dangers and uncertainties in this quadrant and of the inefficacy of Asimov’s science-fiction norms, is building on the latter through the Asilomar AI Principles. These principles are aimed at the safe creation, use and existence of AI and include, amongst others: Transparency (ascertaining the cause if an AI system causes harm); Value Alignment (aligning an AI system’s goals with human values); and Recursive Self-Improvement (subjecting AI systems able to self-replicate to strict safety and control measures). As George Puttenham’s The Arte of English Poesie (1589) puts it:

‘Ye haue another manner of disordered speach, when ye misplace your words or clauses and set that before which should be behind. We call it in English prouerbe, the cart before the horse, the Greeks call it Histeron proteron, we name it the Preposterous’.

