RIDDING AI OF HARMFUL BIAS
The UK’s Arts and Humanities Research Council (AHRC) has launched the Enabling a Responsible AI Ecosystem program to address some of the ethical questions posed by AI. Samantha McGregor, head of AI and design at AHRC, explains why the work is important. “AI presents ethical challenges and biases that differ depending on factors such as the data it is trained on, the team developing it, the settings it is applied in, and who is using it,” she says. “The program has been designed to ensure that the development and application of AI is responsible, accountable, and ethical by default.”
Matthew Guzdial, a professor at the University of Alberta, urges caution about the bias present in AI systems. “AI models are trained on data that captures the biases of the current moment and replicates those biases at scale. Not only harmful biases like racism and sexism, but also biases in composition, use of color and texture, and so on. Ensuring this doesn’t hinder progress is an open problem.”
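Guzdial's point, that a model trained on skewed data reproduces that skew at scale, can be illustrated with a deliberately simplified toy sketch. The data, labels, and "model" below are hypothetical illustrations, not drawn from any real system:

```python
from collections import Counter

# Hypothetical training labels with a 90/10 skew, standing in for
# historical data that over-represents one outcome.
training_labels = ["approve"] * 90 + ["deny"] * 10

def majority_baseline(labels):
    """A trivial 'model' that learns only the most common label in its data."""
    return Counter(labels).most_common(1)[0][0]

learned_label = majority_baseline(training_labels)

# Applied to 1,000 new cases, the model replicates the skew at scale:
# every case receives the majority label, regardless of its merits.
predictions = [learned_label for _ in range(1000)]
print(Counter(predictions))  # all 1000 predictions are "approve"
```

Real models are far more sophisticated, but the dynamic is the same: a 90/10 imbalance in the data becomes a systematic tilt in every downstream decision.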
George King from the Ada Lovelace Institute believes long-standing ethics practices need to change. “AI examining human history raises exciting opportunities that researchers must learn to manage,” he says. “In most corporate and academic institutions, research ethics committees (RECs) are insufficient for the challenges presented by AI. REC reviews are conducted before AI research begins, but the risks involved may only become apparent later in the cycle. Other risks can relate to engaging in extractive research practices in relation to Indigenous communities, or unethical labor practices for third-party data labelers who clean the data to power AI systems. If AI research is to be done safely, such risks must be addressed and accounted for.”