The Guardian (USA)

Deepfake detection tools must work with dark skin tones, experts warn

Hibaq Farah, Technology reporter

Detection tools being developed to combat the growing threat of deepfakes – realistic-looking false content – must use training datasets that are inclusive of darker skin tones to avoid bias, experts have warned.

Most deepfake detectors are based on a learning strategy that depends largely on the dataset used to train them. The detector then uses AI to pick up signs that may not be clear to the human eye.

This can include monitoring blood flow and heart rate. However, these detection methods do not always work on people with darker skin tones, and if training sets do not contain all ethnicities, accents, genders, ages and skin tones, they are open to bias, experts warned.
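How a detector can read such physiological cues from ordinary video is easiest to see in code. The sketch below is a simplified illustration of the idea behind remote photoplethysmography, not any particular product's detector; the face_pixels array and the 30 frames-per-second rate are assumptions made for the example.

```python
import numpy as np

def estimate_heart_rate(face_pixels, fps=30.0):
    """Estimate a pulse rate (BPM) from a cropped face region of video.

    face_pixels: array of shape (n_frames, height, width, 3), RGB frames
    cropped to the face. A real detector would also track the face and
    suppress motion artefacts; this sketch shows only the core signal path.
    """
    # Blood flow subtly modulates how much light the skin reflects;
    # the green channel carries the strongest pulse signal.
    green = face_pixels[:, :, :, 1].mean(axis=(1, 2))

    # Remove the slow-moving brightness baseline so only the pulse remains.
    green = green - np.convolve(green, np.ones(30) / 30, mode="same")

    # Find the dominant frequency inside a plausible heart-rate band.
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    power = np.abs(np.fft.rfft(green)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)  # 42 to 240 beats per minute
    return freqs[band][np.argmax(power[band])] * 60.0  # Hz to BPM
```

The weakness the experts describe sits in the first step: the pulse is a tiny fluctuation in reflected light, and when less light is reflected from darker skin that fluctuation is smaller relative to camera noise, so the frequency peak can be lost.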

Over the last couple of years, concerns have been raised by AI and deepfake detection experts who say bias is being built into these systems.

Rijul Gupta, a synthetic media expert and the co-founder and CEO of DeepMedia, which uses AI and machine learning to assess visual and audio cues for underlying signs of synthetic manipulation, said: “Datasets are always heavily skewed towards white middle-aged men, and this type of technology always negatively affects marginalised communities.”

“At DeepMedia, instead of being race blind, our detectors and our technology actually look for a person’s age, race, gender. So when our detectors are looking to see if the video has been manipulated or not, it has already seen a large amount of samples from various ages and races.”
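One way to read that description is as balanced sampling: ensuring each demographic group contributes a comparable share of training examples instead of letting the raw data's skew carry through. Below is a minimal sketch of that general technique; the group labels and data layout are hypothetical, and this is not DeepMedia's actual code.

```python
import random
from collections import defaultdict

def balance_by_group(samples, key, per_group, seed=0):
    """Draw an equal number of training samples from each group.

    samples: list of dicts, each carrying a demographic label under
    `key` (e.g. a skin-tone or age bracket). Groups with fewer than
    `per_group` examples are oversampled with replacement, a common,
    if crude, way to stop a majority group dominating training.
    """
    rng = random.Random(seed)
    groups = defaultdict(list)
    for s in samples:
        groups[s[key]].append(s)

    balanced = []
    for label, members in groups.items():
        if len(members) >= per_group:
            balanced.extend(rng.sample(members, per_group))
        else:
            balanced.extend(rng.choices(members, k=per_group))
    rng.shuffle(balanced)
    return balanced
```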

Gupta added that deepfake detection tools that use visual cues, such as blood-flow and heart-rate detection, can have “underlying biases towards people with lighter skin tones, because darker skin tones in a video stream are much harder to extract a heart rate out of”.

The “inherent bias” in these tools means that they will perform worse on minorities.
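A claim like this can be checked by scoring a detector separately for each group and comparing error rates; any gap is the “inherent bias” in measurable form. The sketch below shows such an audit, with the field names (group, label, pred) invented for illustration.

```python
from collections import defaultdict

def error_rates_by_group(results):
    """Compute per-group miss rates for a deepfake detector.

    results: list of dicts such as
      {"group": "dark-skin", "label": 1, "pred": 0}
    where label 1 means the clip really is a deepfake. The miss rate,
    i.e. real fakes the detector let through, is the figure that
    matters most for targeted scams.
    """
    fakes = defaultdict(int)
    misses = defaultdict(int)
    for r in results:
        if r["label"] == 1:
            fakes[r["group"]] += 1
            if r["pred"] == 0:
                misses[r["group"]] += 1
    return {g: misses[g] / fakes[g] for g in fakes}
```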

“We will see an end result of an increase in deepfake scams, fraud and misinformation caused by AI that will be highly targeted and focused on marginalised communities,” Gupta said.

Mutale Nkonde, AI policy adviser and the CEO and founder of AI for the People, said the concerns tap into larger exclusions minorities face.

“If we’re gonna have a technology that is maintaining the security of some people, it really should maintain the security of all and, unfortunately, the technology isn’t quite there yet,” Nkonde said.

“We are well educated around the issues that facial recognition has in recognising dark skin, but the general public don’t realise that just because the technology has a new name, function or use doesn’t mean that the engineering has advanced.

“It also doesn’t mean that there is no new thinking in the field. And because there is no regulation anywhere in the world that says: ‘You can’t sell a technology that doesn’t work,’ the underlying bias continues and is reproduced in new technologies.”

Ellis Monk, professor of sociology at Harvard University and a visiting faculty researcher at Google, developed the Monk Skin Tone Scale, an alternative scale that is more inclusive than the tech-industry standard and provides a broader spectrum of skin tones for use in datasets and machine learning models.
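In practice, a scale like Monk's gives dataset builders a concrete checklist: rate every face on the 10-point Monk Skin Tone scale and check whether any tone is barely represented. A short sketch of that coverage check, using hypothetical annotations, might look like this:

```python
from collections import Counter

def mst_coverage(annotations, min_share=0.05):
    """Flag under-represented Monk Skin Tone (MST) bins in a dataset.

    annotations: iterable of MST ratings, integers 1-10, one per face.
    Any of the ten tones holding less than `min_share` of the data is
    reported, since models rarely learn well from tones they barely see.
    """
    counts = Counter(annotations)
    total = sum(counts.values())
    report = {}
    for tone in range(1, 11):
        share = counts.get(tone, 0) / total
        report[tone] = share
        if share < min_share:
            print(f"MST {tone}: {share:.1%} of data - under-represented")
    return report
```

A single call such as mst_coverage([3, 3, 4, 9, 2, 3]) prints a warning for every tone that falls below the threshold, which is the kind of standardised, consistent measure Monk argues prior scales could not provide.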

Monk said: “Darker-skinned people have been excluded from how these different forms of technology have been developed from the very beginning.

“There needs to be new datasets constructed that have more coverage, more representativeness in terms of skin tone and this means you need some kind of a measure that is standardised, consistent and more representative than prior scales.”

Deepfake detection tools use AI to find signs that are not clear to the human eye, but some methods do not work on darker skin tones. Photograph: Reuters
