The intel is artificial; the consequences are not
In a modern company like Amazon, almost all human activity is directed by computer programs. They not only monitor workers’ actions but also help decide who should be hired in the first place. Yet it emerged last week that the company had scrapped an attempt to use artificial intelligence to screen job applicants on the basis of their CVs, because the results consistently discriminated against women.
This is a welcome decision, and one that illuminates important facts about AI. The technical point is that these programs, however fast they learn, can only learn from the data presented to them. If that data reflects historic patterns of discrimination, the results will perpetuate those patterns.

AI is already all around us, and it is always a hybrid or symbiotic system, made up of the humans who tend the programs and feed them data quite as much as of the computers themselves. Companies such as Google or Amazon – and even traditional media and retailers – are now partly constituted by the operations of their computer systems. It is therefore essential that moral and legal responsibility be attached to the human parts of the system. We already hold Facebook and Google responsible for the results of their algorithms. The example of Amazon shows that this principle must be extended more widely. The companies, the people and the governments who use AI must be accountable for its consequences.
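The mechanism is easy to demonstrate. Below is a deliberately toy sketch, using entirely made-up data and not Amazon's actual system: a naive CV screener that scores each keyword by how often it appeared in previously hired candidates' CVs. Because the historical outcomes are skewed, the learned scores penalise an otherwise identical candidate.

```python
# Toy illustration of bias inherited from training data.
# All CVs and outcomes here are hypothetical.
from collections import Counter

# "Historical" hiring records: (CV keywords, was the candidate hired?)
history = [
    (["python", "chess club"], True),
    (["python", "chess club"], True),
    (["java", "chess club"], True),
    (["python", "women's chess club"], False),  # biased past outcomes
    (["java", "women's chess club"], False),
]

hired_counts = Counter()
rejected_counts = Counter()
for keywords, hired in history:
    for kw in keywords:
        (hired_counts if hired else rejected_counts)[kw] += 1

def score(cv):
    # +1 for each keyword seen mostly in hired CVs, -1 otherwise.
    return sum(1 if hired_counts[kw] > rejected_counts[kw] else -1
               for kw in cv)

# Two equally qualified candidates; a single keyword differs.
print(score(["python", "chess club"]))          # scores higher
print(score(["python", "women's chess club"]))  # penalised by learned bias
```

The model has no concept of gender; it has simply compressed a discriminatory history into its scoring rule, which is reportedly much what happened to Amazon's experiment.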