Edinburgh Evening News

Photo industry promises image authentication


Some have speculated that unfortunate picture edits could be, in part, the consequence of inherent biases in the AI systems, which perpetuate harmful stereotypes and encourage sexualised content. It is among a host of other ethical concerns surrounding the use of AI to digitally manipulate pictures, such as copyright infringement, privacy breaches, fake news and the impact on the employment opportunities available to photographers and editors.

However, some of the concerns are more pressing than others. Over the past year, there have been instances of some firms deliberately using AI to create explicit, non-consensual pornographic images. One slew of so-called deepfake photographs that circulated in the town of Almendralejo in southern Spain used images of school-age children taken from their Instagram accounts, before altering them to make it appear as if they were naked. The fake photos were created using ClothOff, an app that has been linked to a similar case in New Jersey. Such horrific and extreme cases may be rare, but they point to how dangerous the ever-improving AI tech can be if wielded by those intent on causing harm.

So what, if anything, can be done? The use of AI is here to stay, even as calls grow for greater government regulation of the space. In the meantime, major players in the photography industry are taking steps to bolster public confidence. Sony, Canon and Nikon have all promised that a feature known as image authentication will soon be rolled out across some of their camera ranges, with the firms having agreed upon a global standard for digital signatures, which make it easier to identify how and when a photograph was taken, and by whom.

That innovation, though welcome, will not solve everything, especially at a time when most people use mid-level smartphones to both take pictures and edit them.

Prof Hany Farid is among those who believe the use of such credentialing protocols should be more widespread, pointing to a metadata-like scheme known as "content credentials" developed by the Coalition for Content Provenance and Authenticity. He described it as the equivalent of a foodstuff label, which can help people understand where an image came from, and how it was created.

“The same technology is already part of Photoshop and other editing programs, allowing edit changes to a file to be logged and inspected,” he told Time.

"All that information pops up when the viewer clicks on the 'cr' icon, and in the same clear format and plain language as a nutrition label on the side of a box of cereal." Significantly, he also said that were the technology fully in use today, photo editors across newsrooms in media outlets around the world could have instantly reviewed the credentials of the royal photograph.

But even so, would that necessarily have settled the debate about whether the image of the Princess of Wales and her children was real or fake, and the point at which editing becomes manipulation? The line between the two is growing ever more blurred, and not just thanks to the tech – some AI developers are trying to change the rhetoric around its use. Take Google's Magic Editor, for example, which promises users the ability to "reimagine" their images.

The focus at the moment may still be on the Princess of Wales and her wayward editing skills, but as the ability to change images becomes easier by the day, she will not be the last public figure to find herself at the centre of a debate over whether we can trust what we see.

