Photo industry promises image authentication
Some have speculated that unfortunate picture edits could be, in part, the consequence of inherent biases in the AI systems, which perpetuate harmful stereotypes and encourage sexualised content. This is one of a host of ethical concerns surrounding the use of AI to digitally manipulate pictures, alongside copyright infringement, privacy breaches, fake news and the impact on the employment opportunities available to photographers and editors.
However, some of the concerns are more pressing than others. Over the past year, there have been instances of some firms deliberately using AI to create explicit, non-consensual pornographic images. A slew of so-called deepfake photographs that circulated in the town of Almendralejo in southern Spain used images of school-age children taken from their Instagram accounts, which were then altered to make it appear as if they were naked. The fake photos were created using ClothOff, an app that has been linked to a similar case in New Jersey. Such horrific and extreme cases may be rare, but they point to how dangerous the ever-improving AI technology can be if wielded by those intent on causing harm.
So what, if anything, can be done? The use of AI is here to stay, even as there are growing calls for greater government regulation of the space. In the meantime, major players in the photography industry are taking steps to bolster public confidence. Sony, Canon and Nikon have all promised that a feature known as image authentication will soon be rolled out across some of their camera ranges, with the firms having agreed upon a global standard for digital signatures, which make it easier to identify how, when and by whom a photograph was taken.
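The principle behind such schemes is tamper evidence: the camera fingerprints the image and its capture metadata at the moment the shutter fires, so any later change to either can be detected. The sketch below is a deliberately simplified illustration, not the manufacturers' actual scheme — real camera signatures use a private key held in the device rather than a bare hash, and the metadata fields shown are invented for the example.

```python
import hashlib

def capture_fingerprint(image_bytes: bytes, metadata: str) -> str:
    # At capture time, the camera would hash the image data together with
    # metadata (time, device) and sign the digest with a private key.
    # This simplified sketch uses a bare SHA-256 digest in place of a
    # cryptographic signature.
    return hashlib.sha256(image_bytes + metadata.encode()).hexdigest()

def verify(image_bytes: bytes, metadata: str, fingerprint: str) -> bool:
    # Any change to the pixels or the metadata produces a different digest,
    # so a mismatch reveals the file has been altered since capture.
    return capture_fingerprint(image_bytes, metadata) == fingerprint

original = b"...raw pixel data..."           # stand-in for real image bytes
meta = "2024-03-10T09:00Z|CameraModelX"      # hypothetical capture metadata
fp = capture_fingerprint(original, meta)

print(verify(original, meta, fp))            # True: file is unaltered
print(verify(original + b"edit", meta, fp))  # False: file has been edited
```

Even one flipped byte changes the digest entirely, which is what makes the fingerprint useful as a seal rather than a summary.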
That innovation, though welcome, will not solve everything, especially at a time when most people use mid-level smartphones to both take pictures and edit them.
Prof Hany Farid is among those who believe the use of such credentialing protocols should be more widespread, and he has pointed to a metadata-like scheme known as “content credentials”, developed by the Coalition for Content Provenance and Authenticity. He described it as the equivalent of a foodstuff label, helping people understand where an image came from and how it was created.
“The same technology is already part of Photoshop and other editing programs, allowing edit changes to a file to be logged and inspected,” he told Time.
“All that information pops up when the viewer clicks on the ‘cr’ icon, and in the same clear format and plain language as a nutrition label on the side of a box of cereal.” Significantly, he also said that were the technology fully in use today, photo editors in newsrooms around the world could have instantly reviewed the credentials of the royal photograph.
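Conceptually, a content-credentials record is an append-only log of edits that travels with the file and can be rendered like a label. The sketch below is a loose, hypothetical stand-in for that idea — the real C2PA format is a cryptographically signed manifest with its own schema, and every field name here is invented for illustration.

```python
import json

# Hypothetical, simplified stand-in for a content-credentials manifest:
# each edit appends an entry, and a viewer can render the whole history.
# Real manifests are cryptographically signed so entries cannot be forged
# or silently removed.
manifest = {
    "captured_by": "CameraModelX",
    "captured_at": "2024-03-10T09:00Z",
    "edits": [],
}

def log_edit(manifest: dict, tool: str, action: str) -> None:
    # Record one editing step in the file's history.
    manifest["edits"].append({"tool": tool, "action": action})

log_edit(manifest, "Photoshop", "crop")
log_edit(manifest, "Photoshop", "colour adjustment")

# Roughly what a viewer might display when the user clicks the "cr" icon:
print(json.dumps(manifest, indent=2))
```

The value is in the inspection step: a newsroom picture editor does not have to trust the sender's word, only read the logged history.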
But even so, would that necessarily have settled the debate about whether the image of the Princess of Wales and her children was real or fake, and the point at which editing becomes manipulation? The line between the two is growing ever more blurred, and not just thanks to the tech – some AI developers are trying to change the rhetoric around its use. Take Google’s Magic Editor, for example, which promises users the ability to “reimagine” their images.
The focus at the moment may still be on the Princess of Wales and her wayward editing skills, but as altering images becomes easier by the day, she will not be the last public figure to find herself at the centre of a debate over whether we can trust what we see.