Ethical artificial intelligence
Google’s code for new research is a creditable start
A revolt in the ranks at technology giant Google may have changed the course of research in artificial intelligence. In a blog post published last week, Google CEO Sundar Pichai laid out seven key ethical principles that the company will apply in choosing AI projects. Google also promises to avoid deploying AI in “technologies that are likely to cause harm”, in weapons development, or in surveillance that contravenes human rights. The background to this declaration is even more interesting. Google had been working on “Project Maven”, an initiative funded by the US Department of Defense that aims to develop better image processing for military drones. Project Maven was to build a customised AI surveillance engine that used “Wide Area Motion Imagery” data captured by drones to detect vehicles and other objects and track their motion. The potential applications include enabling drones to autonomously bomb targets without clearance from a human being. The project created significant consternation within the company.
In fact, over a dozen of Google’s best engineers resigned in protest, and another 4,000 petitioned the management to terminate the contract outright. The uproar was so deafening that Google had to come out and promise not to renew the deal upon its completion next year. Google has now done just that, cancelling Project Maven and releasing a set of seven principles as well as the “no-go” R&D areas. The seven principles it says it will implement are as follows: AI applications must be socially beneficial; avoid creating or reinforcing unfair bias; be built and tested for safety; be accountable to people; incorporate privacy design principles; uphold high standards of scientific excellence; and be made available only for uses in accordance with these principles. The no-go areas are technologies that cause, or are likely to cause, overall harm. Where there is a material risk of harm, Google will proceed “only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints”; it will avoid weapons or other technologies whose principal purpose is to cause or directly facilitate injury to people; technologies that gather or use information for surveillance in violation of internationally accepted norms; and technologies whose purpose contravenes widely accepted principles of international law and human rights.
However, Mr Pichai’s blog post did state that the company would continue to develop military applications in areas such as cybersecurity, military training and recruitment, veterans’ healthcare, and search and rescue operations. A close reading also shows ample wiggle-room and subjectivity within these broad statements, given clauses such as “where we believe benefits outweigh risks” and “appropriate safety constraints”. Given that much AI is multi-use, capabilities developed for an apparently peaceful purpose could be weaponised, or turned into tools for surveillance.
But this is just a beginning. This is the first time that a multinational corporation at the cutting edge of AI research has owned up to any sort of moral and ethical responsibility. It is also notable that this statement of principles came about as the result of a mass movement, in which a very large number of domain experts went public with their qualms. Many other companies are involved in AI research, and some will be tempted to ignore this “ethical blueprint” and muscle into the areas Google is vacating. But the pool of researchers actually capable of doing this work is not large, and domain experts at other companies may now be emboldened to demand similar ethical commitments from their respective employers. To that extent, Google should be commended. However, Mr Pichai could perhaps have done better by stating whether any enforcement mechanism will exist, or what penalties the company will incur should it violate the new guidelines.