Ethics, Artificial Intelligence and Regulation
Ethics is a term derived from the Greek word ethos, which can mean custom, habit, character or disposition. It is an intrinsic requirement for human life and our means of deciding a course of action. At its simplest, it is a system of moral principles.
Codes of ethics have always played an important role in many sciences. Such codes aim to provide a framework within which researchers can understand and anticipate the possible ethical issues that their research might raise, and to provide guidelines about what is, and is not, regarded as ethical behaviour. The late Professor Stephen Hawking opined that new emerging technologies, including Artificial Intelligence (AI), open a new frontier for ethics and risk assessment.
The Institute of Electrical and Electronics Engineers (IEEE) has been working on this front for some time, and to this end published a report entitled “Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Artificial Intelligence and Autonomous Systems.”
The Asilomar AI Principles build on ethics and values, which in turn underpin principles capturing safety, failure and judicial transparency, value alignment in design and compatibility with human values, and privacy.
The UK House of Lords Artificial Intelligence Committee published a report on 16 April 2018 entitled “AI in the UK: ready, willing and able? Report of Session 2017–19”. This report highlights the importance of ethics and articulates five principles which would marry with the Asilomar AI Principles, and which read as follows:
(1) Artificial intelligence should be developed for the common good and benefit of humanity.
(2) Artificial intelligence should operate on principles of intelligibility and fairness.
(3) Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
(4) All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
(5) The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
There is no novelty proposed here, but the report does not stop at these principles. It seems to stress that AI should not be regulated at this juncture. Such a reading stems from paragraph 375, which quotes the Law Society as stating that there is no obvious reason why AI would require further legislation or regulation, that AI is still relatively in its infancy, and that it would be advisable to wait for its growth and development to better understand the forms it takes. In paragraph 373, eminent academics such as Professor Robert Fisher et al. said: “Most AI is embedded in products and systems, which are already largely regulated and subject to liability legislation. It is therefore not obvious that widespread new legislation is needed.”
In other chapters, however, the report seems to imply that some form of legislative intervention might be required. Let’s park this for a second and briefly mention one piece of legislation quoted in the report which will apply in this ambit: the General Data Protection Regulation (GDPR). In summary, the GDPR provides that personal data must be processed in a lawful, fair and transparent manner; collected for specific, expressly stated and justified purposes and not processed in a new way that is incompatible with those purposes; accurate, up to date, adequate, relevant and limited to what is necessary for the purposes for which it is processed; not stored in identifiable form for longer than is necessary for those purposes; and processed in a way that ensures adequate protection of the personal data. Any algorithm would need to be coded with all these criteria in mind, thus following the mandated data protection by design principles. Aside from this, the GDPR also provides for data protection impact assessments (DPIAs), intended as a tool to help organisations identify the most effective way to comply with their data protection obligations and meet individuals’ expectations of privacy.
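As a small illustration of what coding these criteria into a system might look like, the Python sketch below shows a toy data-minimisation step: direct identifiers are replaced with a pseudonym, and only the single field needed for a stated purpose is retained before any further processing. The record fields, the purpose and the salt are all invented for the example; real data-protection-by-design work would of course go far beyond this.

```python
from dataclasses import dataclass
import hashlib

@dataclass
class RawRecord:
    # Hypothetical fields for illustration only
    name: str
    email: str
    age: int
    browsing_history: list

def minimise(record: RawRecord, salt: str) -> dict:
    """Keep only what the stated purpose (age-based analytics) requires."""
    # Replace direct identifiers with a salted pseudonym (pseudonymisation,
    # not anonymisation: the controller holding the salt can still re-link).
    pseudonym = hashlib.sha256((salt + record.email).encode()).hexdigest()
    # Name, email and browsing history are dropped; only the pseudonym and
    # the single field needed for the purpose are retained.
    return {"id": pseudonym, "age": record.age}

record = RawRecord("Ada", "ada@example.com", 36, ["news", "sports"])
print(minimise(record, salt="s3cret"))
```

The point of the sketch is purpose limitation and data minimisation expressed in code: anything not needed for the declared purpose never enters the processing pipeline at all.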
Now, back to the contents of the report. Even though the report counsels against regulating AI at this juncture, the measures it contemplates would in principle boil down to regulations or laws mandating more than the mere observance of ethical standards, as they would promote a higher level of algorithmic transparency, accountability where required, as well as powers for regulators to enforce them. These are topics also touched on by Reuben Binns in his “Algorithmic Accountability and Public Reason”, where, on accountability, he also cites in particular Articles 13 to 15 of the GDPR. I also believe that a close look at the Asilomar Principles, particularly principles 6, 7, 8 and 22, reproduced hereunder, would also hint at the inclusion of such measures and narrative:
6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
Certain software (which is composed of algorithmic code) already allows companies to “draw predictions and inferences about personal lives”. A clear case in point is the recent Cambridge Analytica debacle. For example, a machine learning algorithm can identify a data subject’s sexual orientation, political creed and social groups, and use this information to build profiles, target services and categorise data subjects. As the code learns patterns in the data, it also absorbs the biases in that data, perpetuating them. In one of the most striking examples, COMPAS, an algorithm used by courts across multiple US states to assess a defendant’s risk of reoffending, was found to falsely flag black defendants almost twice as often as white ones.
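To make that finding concrete, the toy Python sketch below measures the false positive rate of a hypothetical risk tool per group, i.e. how often people who did not reoffend were nevertheless flagged as high risk. The records are invented for illustration, not the actual COMPAS data; a disparity of the kind reported shows up as one group’s rate being roughly double the other’s.

```python
def false_positive_rate(records, group):
    """FPR = share of non-reoffenders in a group who were flagged high risk."""
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in negatives if r["high_risk"]]
    return len(flagged) / len(negatives)

# Invented toy records: group, the tool's flag, and the actual outcome.
toy = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
]

print(false_positive_rate(toy, "A"))  # 0.5
print(false_positive_rate(toy, "B"))  # 0.25
```

This is exactly the kind of audit that the transparency and accountability measures discussed above would make possible: without access to the tool’s outputs and outcomes, the disparity cannot even be computed.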
This is AI’s so-called black box problem, and our inability to see the inside of an algorithm and understand how it arrives at a decision. Maybe the fine underlying message out of this report by the UK’s House of Lords is that if this is left unchecked, particularly in an era where code can be law, and where many authors have already sounded the bells on algorithmic governance, this can have devastating effects on our societies.
As Professors Nick Bostrom and Eliezer Yudkowsky stated in “The Ethics of Artificial Intelligence”: “If machines are to be placed in a position of being stronger, faster, more trusted, or smarter than humans, then the discipline of machine ethics must commit itself to seeking human-superior (not just human-equivalent) niceness”.