Khaleej Times

Robots are all the rage, but can we really trust them?

- Michael Winikoff BACK TO THE FUTURE

Self-driving cars, personal assistants, cleaning robots, smart homes — these are just some examples of autonomous systems. With many such systems already in use or under development, a key question concerns trust. My central argument is that having well-working systems is not enough. To enable trust, the design of autonomous systems also needs to consider other requirements, including a capacity to explain decisions and options for recourse when things go wrong.

The past few years have seen dramatic advances in the deployment of autonomous systems. These are essentially software systems that make decisions and act on them, with real-world consequences. Examples include physical systems such as self-driving cars and robots, and software-only applications such as personal assistants.

However, it is not enough to engineer autonomous systems that function well. For us to trust such systems, additional features need to be considered. For example, if a personal assistant functions well, would you trust it even if it could not explain its decisions?

To make a system trustworthy, we need to identify the key prerequisites for trust. Then, we need to ensure that the system is designed to incorporate these features. Ideally, we would answer this question using experiments: we could ask people whether they would be willing to trust an autonomous system, and explore how this depends on various factors. For instance, is providing guarantees about the system's behaviour important? Is providing explanations important?

These experiments have not yet been performed. The prerequisites discussed below are therefore effectively educated guesses.

Firstly, a system should be able to explain why it made certain decisions. Explanations are important if the system's behaviour can be non-obvious, but still correct.

Imagine a software system that coordinates disaster relief operations by assigning tasks and locations to rescuers. Such a system may propose task allocations that appear odd to an individual rescuer, but are correct from the perspective of the overall rescue operation. Without explanations, such task allocations are unlikely to be trusted.

Providing explanations allows people to understand the system, and can support trust even when its behaviour is unpredictable and its decisions unexpected. These explanations need to be comprehensible and accessible, perhaps using natural language. They could also be interactive, taking the form of a conversation.

A second prerequisite for trust is recourse. This means having a way to be compensated if you are adversely affected by an autonomous system. This is a necessary prerequisite because it allows us to trust a system that isn't 100 per cent perfect. And in practice, no system is perfect.

The recourse mechanism could be legal, or a form of insurance. However, relying on a legal mechanism has problems. At least some autonomous systems will be manufactured by large multinationals. A legal mechanism could turn into a David versus Goliath situation, since it involves individuals, or resource-limited organisations, taking multinational companies to court.

More broadly, trustability also requires social structures for regulation and governance. For example, what (inter)national laws should be enacted to regulate autonomous system development and deployment? What certification should be required before a self-driving car is allowed on the road?

It has been argued that both certification and trust require verification. Specifically, this means using mathematical techniques to provide guarantees regarding the decision making of autonomous systems. For some domains, the system's decision-making process should also take into account relevant human values. These may include privacy, human autonomy and safety.

These prerequisites — explanations, recourse and human values — are needed to build trustable autonomous systems. They need to be considered as part of the design process, which would allow appropriate functionality to be engineered into the system.

Addressing these prerequisites requires interdisciplinary collaboration. Finally, there are broader questions. First, what decisions are we willing to hand over to software? Second, how should society prepare for and respond to the multitude of consequences that will come with the deployment of autonomous systems? — The Conversation

Michael Winikoff is a Professor at the University of Otago
