Cheat Sheet: Ethical AI
Davey Winder explores the moral implications of artificial intelligence and how companies are responding to them
Surely it’s too early to be talking about ethical AI?
It’s never too early to take ethical considerations into account with any developing technology. Sure, AI isn’t in Terminator territory just yet, but that doesn’t mean ethical questions haven’t already emerged in the algorithm-led technologies both in development and already in use: everything from the natural language processing behind virtual assistants to facial-recognition systems, self-driving cars and “deep fake” videos. While full-blown Skynet-style AI isn’t a reality, machine learning has been a tech staple for many years, and the results can be seen all around us, at work and in the home.
So what do you mean by the ethics of AI?
The idea of “moral values”, or rules based on accepted standards of behaviour, is at the heart of the ethical AI conversation. It takes the debate around applications of AI systems beyond the purely technical and functional, towards how these “intelligent” algorithms could be used for the common good, or otherwise. How do you prevent pre-existing bias creeping into the machine-learning process? Real-world examples include the Google Photos algorithm that, in 2015, applied the “gorilla” label to images of a Black man. The same year, Amazon found that a recruitment algorithm it had developed to help hire the best job applicants was penalising women, because the data it was learning from was dominated by male candidates.
Where will the responsibility for Artificial Stupidity sit?
This isn’t an easy question to answer. The simple option is to say it sits with the developers of the algorithm. But if a self-driving car hits a pedestrian, is that the fault of the car manufacturer, the “driver” in the vehicle at the time, or the AI involved? And what if we transfer the question to the realm of healthcare? Trusting the decisions made by “intelligent” systems will require visibility and explainability. Developers will need to be able to document machine-learning processes so that outcomes can be explained in context and with clarity.
How are businesses responding?
In 2016, a group of tech firms came together, led by their AI researchers, to establish the Partnership on AI to Benefit People and Society, with a remit to study and formulate AI best practice. Amazon, Apple, DeepMind, Facebook, Google, IBM and Microsoft promised to educate and advocate for ethical AI. In 2018, Google announced it would not renew a Pentagon contract that used AI to improve the performance of military drone weaponry, and CEO Sundar Pichai published a set of guiding principles for the company’s AI applications.
But can we trust Big Tech to be ethical when sales are on the line?
Well, many people would say not. What’s more, governments don’t trust them either. AI rules are being drafted at a rate of knots, with ethical considerations front and centre.
Take the European Commission’s newly drafted regulations, which will provide a legal framework for AI use within the EU. These propose banning systems considered a “threat to safety, livelihoods and rights of people”. The EC’s digital chief, Margrethe Vestager, said that any AI application that manipulates human behaviour to circumvent free will would pose an unacceptable risk, and that those in high-risk sectors such as education or law enforcement would face additional scrutiny. The regulatory teeth proposed include fines of up to 6% of global turnover. It remains to be seen how these, and other, regulatory frameworks will affect real-world outcomes.