The Telegram (St. John's)

Push for AI innovation can create dangerous products

- This piece was first published at TheConversation.com. It was authored by David Weitzner, assistant professor, Administrative Studies, York University.

This past June, the U.S. National Highway Traffic Safety Administration announced a probe into Tesla’s Autopilot software. Data gathered from 16 crashes raised concerns over the possibility that Tesla’s AI may be programmed to quit when a crash is imminent. This way, the car’s driver, not the manufacturer, would be legally liable at the moment of impact.

It echoes the revelation that Uber’s self-driving car, which hit and killed a woman, detected her six seconds before impact. But the AI was not programmed to recognize pedestrians outside of designated crosswalks. Why? Because jaywalkers are not legally supposed to be there.

Some believe these stories are proof that our concept of liability needs to change. To them, unimpeded continuous innovation and widespread adoption of AI is what our society needs most, which means protecting innovative corporations from lawsuits. But what if, in fact, it’s our understanding of competition that needs to evolve instead?

If AI is central to our future, we need to pay careful attention to the assumptions around harms and benefits programmed into these products.

As it stands, there is a perverse incentive to design AI that is artificially innocent.

A better approach would involve a more extensive harm-reduction strategy. Maybe we should be encouraging industry-wide collaboration on certain classes of life-saving algorithms, designing them for optimal performance rather than proprietary advantage.

EVERY FIX CREATES A NEW PROBLEM

Some of the loudest and most powerful corporate voices want us to trust machines to solve complex societal problems. AI is hailed as a potential solution for the problems of cross-cultural communication, health care and even crime and social unrest.

Corporations want us to forget that AI innovations reflect the biases of the programmer. There is a false belief that as long as the product design pitch passes through internal legal and policy constraints, the resulting technology is unlikely to be harmful. But harms emerge in all sorts of unexpected ways, as Uber’s design team learned when their vehicle encountered a jaywalker for the first time.

What happens when the nefarious implications of an AI are not immediately recognized? Or when it is too difficult to take the AI offline when necessary? That is what happened when Boeing hesitated to ground the 737 Max jets after a programming glitch was found to cause crashes, and 346 people died as a result.

In 2019, Boeing admitted that its software was the cause of two deadly crashes.

We must constantly reframe technological discussions in moral terms. The work of technology demands discrete, explicit instructions. Wherever there is no specific moral consensus, individuals simply doing their job will make a call, often without taking the time to consider the full consequences of their actions.

MOVING BEYOND LIABILITY

At most tech companies, a proposal for a product would be reviewed by an in-house legal team, which would draw attention to the policies the design team needs to consider in their programming. These policies might relate to what data is consumed, where the data comes from, what data is stored and how it is used (for example, anonymized, aggregated or filtered). The legal team’s primary concern would be liability, not ethics or social perceptions.

Researchers have called for an approach that considers insurance and indemnity (responsibility for loss compensation) to shift liability and allow stakeholders to negotiate directly with each other. They also propose moving disputes over algorithms to specialized tribunals. But we need bolder thinking to address these challenges.

Instead of liability, a focus on harm reduction would be more helpful. Unfortunately, our current system doesn’t allow companies to easily cooperate or share knowledge, especially when antitrust concerns might be raised. This has to change.
