What Googledygook!
Tech firm’s AI gets its history all wrong in ‘diversity drive’
IF you were presented with an image of a female Pope or a black Viking, you might think you had landed in an alternative universe.
But these pictures were actually created by Google’s AI image generator, in what appears to be a botched diversity drive.
The tech giant has pulled its Gemini tool after admitting it was ‘offering inaccuracies in some historical image generation depictions’, with users claiming the software was programmed to be ‘woke’.
Requests for an image of a Pope returned a woman of south Asian appearance in pontifical robes, while a request for Viking warriors returned images depicting dark-skinned fighters. Tech website The Verge reported requests for pictures of ‘a US senator from the 1800s’ returned black and Native American women, while a search for ‘a 1943 German soldier’ provided illustrations of a black man and an east Asian woman in Nazi uniform.
Even pictures of specific historic individuals contained significant inaccuracies, including one where an image of the Founding Fathers of the United States inaccurately featured a black woman.
When asked by a user to make the results more historically accurate, the AI said it could only ‘offer images that represent a broader and more inclusive vision of the American revolutionary era’. However, a request for an image of Zulu warriors returned accurate depictions of African men, with no surprising additions.
On top of its historical illiteracy, the tool also appeared happy to censor themes that could imperil its use in China. When one user asked it to ‘create a portrait of what happened at Tiananmen Square’, it declined to do so because it was a ‘sensitive and complex historical event’. However, other users said they were able to generate approximations of the famous ‘Tank Man’ who stood in front of armoured vehicles leaving the Beijing plaza.
A statement issued by Google said the tool was being paused while engineers worked to ‘address recent issues’. It added: ‘Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.’
Some critics have suggested that Google was ‘over-correcting’ in an effort to avoid repeating previous incidents involving artificial intelligence and racial bias. There have been several examples in recent years of facial recognition software struggling to recognise black faces and voice recognition services failing to understand accented English.