Gulf News

“The unpacking of philosophical questions adds value to technology.”

Questions about the ethical use of technology may seem tangential to innovation, but because the value of a technology is in its application, philosophical questions add value to it

- Ryan Jenkins

Silicon Valley continues to wrestle with the moral implications of its inventions — often blindsided by the public reaction to them: Google was recently criticised for its work on ‘Project Maven’, a Pentagon effort to develop artificial intelligence (AI) for military drones with the ability to distinguish between different objects captured in drone surveillance footage. The company could have foreseen that a potential end use of this technology would be fully autonomous weapons — so-called “killer robots” — which various scholars, AI pioneers and many of its own employees vocally oppose. Under pressure — including an admonition that the project runs afoul of its former corporate motto, “Don’t Be Evil” — Google said it wouldn’t renew the Project Maven contract when it expires next year.

To quell the controversy surrounding the issue, Google last week announced a set of ethical guidelines meant to steer its development of AI. Among its principles: The company won’t “design or deploy AI” for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”. That’s a reassuring pledge.

What’s harder is figuring out, going forward, where to draw the line — to determine what, exactly, “cause” and “directly facilitate” mean, and how those limitations apply to Google projects. To find the answers, Google, and the rest of the tech industry, should look to philosophers, who’ve grappled with these questions for millennia. Philosophers’ conclusions, derived over time, will help Silicon Valley identify possible loopholes in its thinking about ethics.

The realisation that we can’t perfectly codify ethical rules dates at least to Aristotle, but we’re familiar with it in our everyday moral experience, too.

We know we ought not lie, but what if it’s done to protect someone’s feelings?

We know killing is wrong, but what if it’s done in self-defence? Our language and concepts seem hopelessly Procrustean when applied to our multifarious moral experience. The same goes for the way we evaluate the uses of technology.

In the case of Project Maven, or weapons technology in general, how can we tell whether artificial intelligence facilitates injury or prevents it? The Pentagon’s aim in contracting with Google was to develop AI to classify objects in drone video footage. In theory, at least, the technology could be used to reduce civilian casualties that result from drone strikes. But it’s not clear whether this falls afoul of Google’s guidelines. Imagine, for example, that artificial intelligence classifies an object, captured by a drone’s video, as human or non-human and then passes that information to an operator who makes the decision to launch a strike. Does the AI that separates human from non-human targets “facilitate injury”? Or is the resulting injury from a drone strike caused by the operator pulling the trigger?

New ethical guidelines

On one hand, the enhanced ability of the drone operator to visually identify humans and, potentially, refrain from targeting them, could mean the AI’s function is to prevent harm, and it would, therefore, fit within the company’s new ethical guidelines. On the other hand, the fact that the AI is a component of an overall weapons system that’s used to attack targets, including humans, could mean the technology is ultimately employed to facilitate harm, and therefore its development runs afoul of Google’s guidelines. Sorting out causal chains such as this is challenging for philosophers and can lead us to jump through esoteric metaphysical hoops. But the exercise is important, because it forces the language we use to be precise and, in cases like this, to determine whether someone, or something, is rightly described as the cause, direct or indirect, of harm. Google appears to understand this, and its focus on causation is appropriate, but its gloss on the topic is incomplete.

One problem its guidelines don’t adequately address is the existence of so-called “dual use” technologies, which can be used for civilian or military purposes. A drone’s autopilot system can be used for a task as innocuous as recording a snowboarder travelling down a mountain, or it can allow a loitering munition to hover above a battlefield while its operator scrutinises the area below for targets. Which of these is the “primary purpose”?

A more rigorous set of ethical guidelines would make it clear how corporations would approach the development of ostensibly innocent technologies that could be co-opted for “evil” uses. While Google’s guidelines state, “We will work to limit potentially harmful or abusive applications”, it would be comforting to see a more robust explanation of how the company will evaluate the potential for harm or abuse, and how it distinguishes a technology’s primary use from its other uses, since the uses of an invention often become clear only much later.

In the context of internet surveillance, Google’s new guidelines place constraints on what data the company will collect, saying it will shun “technologies that gather or use information for surveillance violating internationally accepted norms”. But “accepted norms” isn’t a sufficient catch-all, because in some countries, spying on everyone, all the time, is the accepted practice. Indeed, it presents a classic problem in philosophy: You can’t justify an action by pointing to what everyone else is doing. There has to be a way to determine the difference between what people do and what they ought to do — otherwise, no one ever does the wrong thing. Google’s guidance falls short because it relies on a relatively nebulous concept — “norms” — rather than an articulation of company values. For a statement of principles such as Google’s to mean something, companies have to know their own values, be committed to them, and then sort through these questions in tandem with the technological development process; the work can’t be accomplished by the development process alone.

Necessary condition

What Google has now is a start: It provides what philosophers would call a necessary condition. It has articulated that, at a minimum, the company should avoid developing technology that falls afoul of international norms. But this still leaves too much wiggle room, since widespread data collection is commonly practised internationally — and is, therefore, arguably an accepted norm — but is also widely regarded as harmful by civil libertarians.

Scientists search for answers, but as the work of the University of South Carolina’s Justin Weinberg illustrates, philosophers’ contributions to a decision-making process can be hard to spot, because the value of philosophy often includes discovering additional questions.

Questions about the ethical use of technology may seem tangential to the development of new and innovative technology, but because the value of a technology is in its application, and a company like Google is valued based on the application of its technology, the unpacking of these philosophical questions — and a meaningful enhancement of ethical guidelines — adds value to technology. Identifying ambiguities in a company’s ethical reasoning, then, is good for both the society impacted by the technology and the corporate bottom line.

As a leading tech company, Google shapes technologies that affect billions of people, and its commitment to answering these philosophical questions sets the stage for the rest of Silicon Valley — as a company, it can initiate a race to the top in terms of the ethical principles to which tech companies commit themselves.

To “get this right,” in Google’s words, doesn’t just mean developing a mission statement. It means crafting an ethics policy sensitive to ambiguities. It means considering different understandings of causation, harm and moral justification. And it means making sure the ethical core guiding a company meets the technical challenge of constructing artificial intelligence. All that means incorporating the input of philosophers.


Illustration: Niño Jose Heredia/©Gulf News
