The Malta Independent on Sunday

Ethics, Artificial Intelligence and Regulation

Ethics is a term derived from the Greek word ethos, which can mean custom, habit, character or disposition. It is an intrinsic requirement for human life and our means of deciding a course of action. At its simplest, it is a system of moral principles.

- Ian Gauci

Codes of ethics have always played an important role in many sciences. Such codes aim to provide a framework within which researchers can understand and anticipate the possible ethical issues that their research might raise, and to provide guidelines about what is, and is not, regarded as ethical behaviour. The late Professor Stephen Hawking opined that new emerging technologies, including Artificial Intelligence (AI), open a new frontier for ethics and risk assessment.

The Institute of Electrical and Electronics Engineers (IEEE) has been working on this front for some time, and to this end published a report entitled “Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Artificial Intelligence and Autonomous Systems”.

The Asilomar AI Principles build on ethics and values, which in turn underpin principles covering safety, failure and judicial transparency, value alignment in design, compatibility with human values, and privacy.

On 16 April 2018 the UK House of Lords published the report of its Select Committee on Artificial Intelligence, “AI in the UK: ready, willing and able?” (Report of Session 2017-19). The report highlights the importance of ethics and sets out five principles, which sit well with the Asilomar AI principles and read as follows:

(1) Artificial intelligence should be developed for the common good and benefit of humanity.

(2) Artificial intelligence should operate on principles of intelligibility and fairness.

(3) Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.

(4) All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

(5) The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

There is no novelty proposed here, but the report does not stop at these principles. It seems to stress that AI should not be regulated at this juncture. Such a reading stems from paragraph 375, which quotes the Law Society of England and Wales stating that there is no obvious reason why AI would require further legislation or regulation, that AI is still relatively in its infancy, and that it would be advisable to wait for its growth and development to better understand its forms. In paragraph 373, eminent academics such as Prof Robert Fisher et al state: “Most AI is embedded in products and systems, which are already largely regulated and subject to liability legislation. It is therefore not obvious that widespread new legislation is needed.”

The report, however, seems to imply in other chapters that a form of legislative intervention might be required. Let’s park this for a second and briefly mention one piece of legislation, the General Data Protection Regulation (GDPR), which is also quoted in the report and which will be applicable in this ambit. In summary, the GDPR provides that when personal data is processed it should be processed in a lawful, fair and transparent manner. It must be collected for specific, expressly stated and justified purposes and not further processed in a way that is incompatible with those purposes. It must be correct, updated, adequate, relevant and limited to what is necessary for the purposes for which it is processed, not stored in identifiable form for longer than those purposes require, and processed in a way that ensures adequate protection of personal data. Any algorithm would need to be coded with all these criteria in mind, thus following the mandated data protection by design principles. Aside from this, the GDPR also provides for data protection impact assessments (DPIAs), intended as a tool to help organisations identify the most effective way to comply with their data protection obligations and meet individuals’ expectations of privacy.
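To make the point about coding these criteria concrete, here is a minimal Python sketch of how purpose limitation and data minimisation might be enforced before any processing takes place. All the names in it (DECLARED_PURPOSES, ALLOWED_FIELDS, PurposeError) are purely illustrative assumptions, not a real GDPR library or anyone’s actual compliance mechanism:

# Hypothetical sketch of data protection by design: processing is refused
# unless the requested purpose was declared at collection time, and only
# the fields needed for that purpose ever reach the algorithm.

DECLARED_PURPOSES = {"credit_scoring"}                    # purposes stated at collection
ALLOWED_FIELDS = {"credit_scoring": {"income", "outstanding_debt"}}  # data minimisation

class PurposeError(Exception):
    pass

def process(record: dict, purpose: str) -> dict:
    # Purpose limitation: reject any use incompatible with declared purposes.
    if purpose not in DECLARED_PURPOSES:
        raise PurposeError(f"purpose '{purpose}' was never declared to the data subject")
    # Data minimisation: strip everything this purpose does not require.
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS[purpose]}

record = {"name": "A. Borg", "income": 28000,
          "outstanding_debt": 4000, "religion": "n/a"}
print(process(record, "credit_scoring"))   # {'income': 28000, 'outstanding_debt': 4000}

The point of the sketch is simply that compatibility with the declared purpose is checked, and superfluous fields are discarded, before the data ever reaches the algorithm.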

Now, back to the contents of the report. Even though the report stops short of recommending AI-specific legislation, the measures it contemplates would in principle boil down to regulations or laws which could mandate more than observance of ethical standards, as they would promote a higher level of algorithmic transparency, accountability where required, as well as powers for regulators to enforce them. These are topics also touched on by Reuben Binns in his “Algorithmic Accountability and Public Reason”, where, on accountability, he cites in particular Articles 13 to 15 of the GDPR. I also believe that a close look at the Asilomar Principles, particularly principles 6, 7, 8 and 22 reproduced hereunder (a simple sketch follows the list), would also hint at the inclusion of such measures and narrative:

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
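As a simple illustration of principle 7 (failure transparency), consider the sketch below: every automated decision is recorded together with its inputs and its stated reason, so that a competent human authority can later ascertain why a harmful decision was taken. The decision rule, names and log format are illustrative assumptions, not any real system’s interface:

import datetime
import json

AUDIT_LOG = []   # in a real system this would be tamper-evident storage

def decide_and_log(case_id: str, features: dict) -> bool:
    # Hypothetical decision rule standing in for a real model.
    approved = features.get("income", 0) > 2 * features.get("outstanding_debt", 0)
    # Record inputs, outcome and reason, so any harm can later be traced.
    AUDIT_LOG.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "case": case_id,
        "inputs": features,
        "decision": "approved" if approved else "refused",
        "reason": "income must exceed twice the outstanding debt",
    })
    return approved

decide_and_log("case-001", {"income": 28000, "outstanding_debt": 4000})
print(json.dumps(AUDIT_LOG, indent=2))   # the trail an auditor would inspect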

Certain software, built on algorithmic code, already allows companies to “draw predictions and inferences about personal lives”. A clear case in point is the recent Cambridge Analytica debacle. For example, a machine learning algorithm could successfully identify a data subject’s sexual orientation, political creed and social groups, and use this information to build profiles, target services and categorise data subjects. As the code learns patterns in the data, it also absorbs the biases in that data, perpetuating them. In one of the most striking examples, COMPAS, an algorithm used by courts across multiple US states to assess a defendant’s risk of reoffending, was found to falsely flag black individuals almost twice as often as white ones.
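That disparity was established by comparing false positive rates across groups: among people who did not go on to reoffend, how often was each group wrongly flagged as high risk? The Python sketch below illustrates the calculation on made-up numbers chosen only to show an almost two-to-one gap; it is not the actual COMPAS data:

# Among people who did NOT reoffend, the share wrongly flagged as high risk.
def false_positive_rate(rows):
    non_reoffenders = [r for r in rows if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["flagged_high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Two synthetic cohorts of 100 non-reoffenders each; the flag counts
# (45 vs 23) are invented purely to illustrate a two-to-one disparity.
group_a = [{"reoffended": False, "flagged_high_risk": i < 45} for i in range(100)]
group_b = [{"reoffended": False, "flagged_high_risk": i < 23} for i in range(100)]

print(false_positive_rate(group_a))   # 0.45 -> flagged almost twice as often
print(false_positive_rate(group_b))   # 0.23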

This is AI’s so-called black box problem: our inability to see inside an algorithm and understand how it arrives at a decision. Perhaps the underlying message of the House of Lords report is that if this is left unchecked, particularly in an era where code can be law, and where many authors have already sounded the alarm on algorithmic governance, it could have devastating effects on our societies.

As Nick Bostrom and Eliezer Yudkowsky stated in “The Ethics of Artificial Intelligence”: “If machines are to be placed in a position of being stronger, faster, more trusted, or smarter than humans, then the discipline of machine ethics must commit itself to seeking human-superior (not just human-equivalent) niceness”.
