RIDDING AI OF HARMFUL BIAS

The UK’s Arts and Humanities Research Council (AHRC) has launched the Enabling a Responsible AI Ecosystem program to address some of the ethical questions posed by AI. Samantha McGregor, head of AI and design at AHRC, explains why the work is important. “AI presents ethical challenges and biases that differ depending on factors such as the data it is trained on, the team developing it, the settings it is applied in, and who is using it,” she says. “The program has been designed to ensure that the development and application of AI are responsible, accountable, and ethical by default.”

Matthew Guzdial, a professor at the University of Alberta, urges caution about the bias present in AI systems. “AI models are trained on data that captures the biases of the current moment and replicates those biases at scale. That means not only harmful biases like racism and sexism, but also biases in composition, use of color and texture, and so on. Ensuring this doesn’t impede progress is an open problem.”

George King from the Ada Lovelace Institute believes long-standing ethics practices need to change. “AI examining human history presents exciting opportunities that researchers must learn to manage,” he says. “In most corporate and academic institutions, research ethics committees (RECs) are insufficient for the challenges presented by AI. REC reviews are done before AI research begins, but the risks involved may only become apparent later in the cycle. Other risks relate to extractive research practices involving indigenous communities, or unethical labor practices for the third-party data labelers who clean the data that powers AI systems. If AI research is to be done safely, such risks must be addressed and accounted for.”
