Business Standard

Ethical artificial intelligence

Google’s code for new research is a creditable start


A revolt in the ranks at technology giant Google may have changed the course of research in artificial intelligence. In a blog post released last week, Google CEO Sundar Pichai laid out seven key ethical principles that the company will apply in choosing AI projects. In addition, Google promises to avoid deploying AI in “technologies that are likely to cause harm”, in weapons development, or in surveillance that contravenes human rights. The background to this declaration is even more interesting. Google had been working on “Project Maven”, an initiative funded by the US Department of Defense that aims to develop better image processing for military drones. Project Maven was to develop a customised AI surveillance engine that used “Wide Area Motion Imagery” data captured by drones to detect vehicles and other objects and track their motion. Potential applications include enabling drones to autonomously bomb targets without clearance from a human being. The project created significant consternation within the company.

In fact, over a dozen of Google's best engineers resigned in protest, and another 4,000 petitioned the management to terminate the contract outright. The uproar was so deafening that Google had to come out and promise not to renew the deal upon its completion next year. Google has now done just that, cancelling Project Maven and releasing a set of seven principles as well as a list of “no-go” R&D areas. The seven principles it says it will implement are as follows: AI applications must be socially beneficial; avoid creating or reinforcing unfair bias; be built and tested for safety; be accountable to the people; incorporate privacy design principles; uphold high standards of scientific excellence; and be made available only for uses in accordance with these principles. The no-go areas are technologies that cause, or are likely to cause, overall harm. Where there is a material risk of harm, Google will proceed “only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints”; it will avoid weapons or other technologies whose principal purpose is to cause or directly facilitate injury to people; technologies that gather or use information for surveillance in violation of internationally accepted norms; and technologies whose purpose contravenes widely accepted principles of international law and human rights.

However, Mr Pichai’s blog did state that the company would continue to develop military applications in areas such as cybersecurity, military training and recruitment, veterans’ healthcare, and search and rescue operations. In addition, a close reading shows ample wiggle room and subjectivity within these broad statements, given clauses such as “where we believe benefits outweigh risks” and “appropriate safety constraints”. Given that much AI is multi-use, capabilities developed for an apparently peaceful purpose could be weaponised, or turned into tools for surveillance.

But this is just a beginning. This is the first time that any multinational corporation at the cutting edge of AI research has owned up to any sort of moral and ethical responsibility. It is also notable that this statement of principles came about as a result of a mass movement, in which a very large number of domain experts went public with their qualms. There are many other companies involved in AI research, and some will be tempted to ignore this “ethical blueprint” and muscle into the areas Google is vacating. But the pool of researchers actually capable of doing this work is not that large, and domain experts in other companies may now be emboldened to demand similar ethical commitments from their respective employers. To that extent, Google should be commended. However, Mr Pichai could perhaps have done better by stating whether any enforcement mechanism will exist, or what penalties the company will incur should it violate the new guidelines.
