PC Pro

Cheat Sheet: Ethical AI

Davey Winder explores the moral implications of artificial intelligence and how companies are responding to them

Surely it’s too early to be talking about ethical AI?

It’s never too early to take ethical considerations into account when talking about any developing technology. Sure, AI isn’t in Terminator territory just yet, but that doesn’t mean ethical questions haven’t already emerged in the algorithm-led technologies that are both in development and already in use: everything from the natural language processing behind virtual assistants to facial-recognition systems, self-driving cars and “deepfake” videos. While full-blown Skynet AI isn’t a reality, machine learning has been a tech staple for many years and the results can be seen all around us, at work and in the home.

So what do you mean by the ethics of AI?

The idea of “moral values”, or rules based on accepted standards of behaviour, is at the heart of the ethical AI conversation. It takes the debate around applications of AI systems beyond the purely technical and functional, towards how these “intelligent” algorithms could be used for the common good, or otherwise. How do you prevent pre-existing bias creeping into the machine-learning process? Real-world examples include the Google Photos algorithm that, in 2015, applied a “gorilla” label to images of a Black man. The same year, Amazon found that a recruitment algorithm it had developed to help hire the best job applicants was penalising women, because the data it was learning from was dominated by male candidates.
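
To make that concrete, here’s a minimal, hypothetical sketch in Python, using scikit-learn and entirely made-up data (nothing here comes from Amazon’s actual system). It shows how a model trained on historically biased hiring decisions learns to penalise a protected attribute even when ability is distributed identically across groups:

```python
# Hypothetical sketch: biased training labels produce a biased model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Synthetic CVs: one genuine skill score and one protected attribute
# (0 = male, 1 = female). Skill is distributed identically for both.
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)

# Historical hiring decisions: skill matters, but past human decisions
# also penalised women -- this is the bias baked into the labels.
hired = (skill - 0.8 * gender + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The model faithfully reproduces the historical bias: a large
# negative coefficient on the gender column.
print(dict(zip(["skill", "gender"], model.coef_[0])))
```

The point isn’t the model, it’s the data: skill is identical for both groups, yet the learned weight on the gender column comes out strongly negative because the historical labels were skewed.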

Where will the responsibility for Artificial Stupidity sit?

This isn’t an easy question to answer. The simple option is to say it sits with the developers of the algorithm. But if a self-driving car hits a pedestrian, is that the fault of the car manufacturer, the “driver” in the vehicle at the time, or the AI involved? What if we transfer the question into the realm of healthcare? Trusting the decisions made by “intelligent” systems will require visibility and explainability: developers will need to be able to document machine-learning processes so that outcomes can be explained in context and with clarity.
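
What might that documentation look like? One common technique, shown here purely as an illustration with invented loan-approval data, is permutation importance: shuffle each input feature in turn and record how much the model’s accuracy drops, which gives a plain account of which factors actually drove its decisions. A minimal sketch with scikit-learn:

```python
# Illustrative sketch: permutation importance as one way to document
# which inputs a model's decisions actually depend on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["income", "debt", "postcode"]  # invented feature names
X = rng.normal(size=(2_000, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(int)    # approvals depend on income and debt only

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Record the findings: postcode should score near zero here, and if
# it didn't, that would be something an auditor needs explained.
for name, score in zip(features, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

An audit trail like this doesn’t settle the liability question by itself, but it gives regulators, insurers and courts something concrete to inspect.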

How are businesses responding?

In 2016, a group of tech firms, led by their AI researchers, came together to establish the Partnership on AI to Benefit People and Society, with a remit to study and formulate AI best practice. Amazon, Apple, DeepMind, Facebook, Google, IBM and Microsoft promised to educate and advocate for ethical AI. In 2018, however, Google announced it would not renew a contract with the US Pentagon that used AI to analyse military drone footage, and CEO Sundar Pichai published a code of practice setting out guiding objectives for AI applications.

But can we trust Big Tech to be ethical when sales are on the line?

Well, many people would say not. What’s more, governments don’t trust them, either. AI rules are being drafted at a rate of knots, with ethical considerations front and centre.

Take the European Commission’s newly drafted regulations, which will provide a legal framework for AI use within the EU. These propose banning systems considered a “threat to the safety, livelihoods and rights of people”. The EC’s digital chief, Margrethe Vestager, said that any AI application that manipulates human behaviour to circumvent free will poses an unacceptable risk, and those in high-risk sectors such as education or law enforcement would face additional scrutiny. The regulatory teeth proposed include fines of up to 6% of global turnover. It remains to be seen how these, and other, regulatory frameworks will affect real-world outcomes.
