The Malta Independent on Sunday

Of robots and rights

In the midst of the blockchain frenzy that has currently taken hold of Malta's business community, Prime Minister Dr Joseph Muscat has announced that Malta will now turn its focus to the regulation of 'artificial intelligence' or, as it is increasingly known, 'AI'.

- Jackie Mallia

Speaking at the Delta Summit held earlier this month, the Prime Minister highlighted the need for "new forms of social safety nets and a rethink of basic interactions". He said, "Not only can we not stop change, but we have to embrace it with anticipation since it provides society with huge opportunities." This statement was followed by similar declarations at the Malta Innovation Summit, at which Dr Muscat reiterated these intentions and even observed that "in the not too distant future, we may reach a stage where robots may be given rights under the law".

This latter statement seems to have generated some unease. Comments posted online beneath the articles in which the Prime Minister's announcements were reported were sometimes quite negative. Reading through them, I came to the realisation that for many, the mention of 'AI' still conjures up images of the Terminator, apocalyptic outcomes and the words of the late Prof Stephen Hawking: "the development of full artificial intelligence could spell the end of the human race."

Despite resistance to the technology, however, the reality is that although a machine possessing the full range of human cognitive abilities (self-awareness, sentience and consciousness) may take decades to materialise, artificial intelligence is already present in our daily lives and is already affecting them both negatively and positively, just as any other human invention does. We can consider, for example, these familiar systems (which are only the tip of the iceberg):

• speech recognition and 'intelligent assistants', for example Amazon's Alexa, Apple's Siri and Microsoft's Cortana;

• transactional AI systems, for example those used by Amazon and Netflix to predict products or content in which a user is likely to be interested, based on that user's past behaviour;

• 'intelligent thermostats', for example Nest, which anticipates and adjusts the temperature in a user's home or office based on past personal patterns;

• self-driving vehicles, for example Tesla's self-driving vehicles, which are proclaimed to "have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver."

Of course, as this technology evolves, there have also been a number of high-profile failures: the Google Photos application, which erroneously tagged photographs in a highly inappropriate manner; certain Google Home Minis, which were apparently occasionally turning on secretly, recording audio from their owners and sending the recordings back to Google; and the Facebook AI-driven chatbots Alice and Bob, which had at one point developed their own language and were having private conversations with each other, leading to them being shut down. In addition, there have already been two well-documented fatal autonomous car accidents in 2018.

In this scenario, where AI is still evolving but at the same time becoming part of our daily lives, we need, as a society, to ask ourselves some important questions:

What is happening to the data that such systems are collecting about us?

To what extent are these systems taking decisions about us in such an automated manner that we are not even aware of it?

Do we have the right to know the basis upon which such decisions were taken?

Do we have the right to request human intervention in relation to such decisions?

Would this then mean that owners of AI systems would be required to reveal the algorithms upon which these decisions were taken?

Can decisions taken by a machine be explained in a Court of Law other than by revealing the algorithms?

What happens when AI proprietors do not know the algorithms used, as we reach the stage where AI will itself build AI in a way that might not be transparent to human beings?

If the machine’s ‘intelligen­ce’ is based on big data being fed to it in an automated manner, how do we ensure that the data is free from bias of any kind? Can we do this at all?

If the machine’s decision is flawed, who is liable for this?

Last, but also very importantly, we need to ask ourselves: with machines becoming smarter and perhaps 'outperforming' humans in an increasing number of areas, to what extent will human jobs be threatened?

A focus on the regulation of AI is therefore neither misplaced nor secondary: the issues are real and present, and the questions are endless. The answer, however, is not to turn away from innovation, as this will come our way whether we want it to or not. The answer, as Prime Minister Muscat said, is "to embrace it", but it is crucial to do so in the most responsible way possible, through appropriate strategy and optimal legislation.

Dr Mallia is the owner of Equinox Legal. She obtained her Doctorate of Laws at the University of Malta and continued her studies at Queen Mary, University of London, specialising in IT Law. Her legal work focuses on the regulation of technology and emerging trends in this sector. Her current focus is on AI.