Los Angeles Times

Real struggles with fake photographs

Humans are poorly equipped to recognize digitally altered images, study finds.

- KAREN KAPLAN

Experts estimate that humans take more than 1 trillion photos a year, and that we’re uploading them to Facebook (let alone the rest of the Internet) at a rate of 4,000 per second.

How many of these images have been altered, doctored or outright faked? We’ll probably never know, new research suggests.

Not only is the human vision system poorly equipped to recognize when a photo has been manipulated, but there may not be much we can do to make it work better, the new study concludes.

Don’t believe it? Take a look at the two photos with this story, both featuring a man holding a fish. See if you can tell which one is the original and which has been changed — and how. (The answer is at the end of the story.)

A team from the University of Warwick in England found pictures on the Internet and altered them in various ways. Sometimes they added something to the scene (was the man really wearing a watch?). Sometimes they took something out (did there used to be shadows in the trees behind him?). In some cases, shadows were altered or shapes of objects were changed. The researchers also engaged in good old-fashioned airbrushing.

Then they showed their pictures to 707 people ages 14 to 82 who volunteered to test their ability to spot a fake. Subjects were presented with 10 photos and asked whether they believed each image had been digitally altered. If they answered yes, they were then asked to click on the region of the photo that had been changed. (Half of the photos each person saw were originals and half were altered.)

Volunteers contemplated each photo for just under 44 seconds, on average. When they thought they had spotted a fake, it took an average of 10.5 seconds to identify the place where they thought the photo had been changed.

When it came to detecting fakes, there were only two possible answers: yes or no. That means volunteers guessing randomly would have been right 50% of the time.

But they didn’t seem to guess randomly.

They correctly classified photos as either original or altered 66% of the time, on average. They did a better job spotting originals (72% correctly identified) than the images that had been changed (60% correctly identified).
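
For readers curious how far that 66% figure sits from coin-flip guessing, here is a minimal sketch, not part of the study's analysis: the volunteer count and 10-photo setup come from the article, and everything else is illustrative. It simulates volunteers answering yes or no at random and checks how often chance alone produces a group average that high.

```python
import random
import statistics

# Illustrative simulation only, not the researchers' method.
# Setup taken from the article: 707 volunteers, 10 photos each,
# a yes/no judgment per photo, so chance performance is 50%.
N_VOLUNTEERS = 707
PHOTOS_EACH = 10
OBSERVED_MEAN = 0.66   # average accuracy reported in the first experiment
SIMULATIONS = 2_000

def mean_accuracy_if_guessing() -> float:
    """Average accuracy across volunteers who guess at random on every photo."""
    per_volunteer = [
        sum(random.random() < 0.5 for _ in range(PHOTOS_EACH)) / PHOTOS_EACH
        for _ in range(N_VOLUNTEERS)
    ]
    return statistics.mean(per_volunteer)

# How often does pure guessing produce a group average at least as high as 66%?
hits = sum(mean_accuracy_if_guessing() >= OBSERVED_MEAN for _ in range(SIMULATIONS))
print(f"Simulated chance of averaging >= {OBSERVED_MEAN:.0%}: {hits / SIMULATIONS:.4f}")
# With roughly 7,000 coin-flip judgments pooled together, the simulated
# average hugs 50%, so a 66% group average essentially never arises by luck.
```

The point of the sketch is simply that, once thousands of judgments are pooled, a group average 16 points above chance cannot be explained by lucky guessing.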

But the researchers were not exactly impressed by the volunteers’ performance.

“Although subjects’ ability to detect manipulated images was above chance, it was still far from perfect,” they wrote. “Furthermore, even when subjects correctly indicated that a photo had been manipulated, they could not necessarily locate the manipulation.” Indeed, only 45% of the changes were correctly located, on average.

The volunteers were better at noticing that something was amiss when the alterations were “physically implausible,” such as when an object appeared to cast a shadow in the wrong direction. But the precise locations of these impossible alterations were just as hard to pinpoint as changes that were more subtle.

The researchers then repeated their experiment with another 659 volunteers. This time, instead of using pictures that were already online and compressed into the JPEG format, they took (and modified) their own photos and kept them in the higher-quality, lossless PNG format.

In this second experiment, they also asked volunteers to say where a photo had been altered, even when they thought the photo was authentic.

This time, subjects spent an average of nearly 58 seconds deciding whether a photo had been faked, and an average of 11 seconds deciding where the change had been made.

Though they spent more time considering the photos, they did a slightly worse job determining which pictures were originals and which were not.

Overall, they classified them correctly 62% of the time, on average — worse than the 66% in the first experiment but still better than the 50% that random guessing would have produced. This time around, the subjects were better at identifying manipulated photos (65% correctly identified) than the originals (58% correctly identified).

The second group of volunteers outperformed the first when it came to finding the alterations — on average, they got these right 56% of the time.

In 18% of the trials, volunteers correctly said that a photo had been changed, but they weren’t able to say where. On the flip side, in 10% of cases volunteers incorrectly said a photo was unaltered but then went on to guess the correct location of the alteration.

Unlike in the first experiment, the volunteers in the second experiment were no better at spotting implausible fakes than plausible ones.

One thing that was constant, however, was that the more an image had been altered, the more likely the subjects were to notice it. This was particularly surprising, the researchers wrote, since subjects presented with a manipulated image never saw the original version of the same picture and couldn’t make a direct comparison.

“People’s ability to detect manipulated photos of real-world scenes is extremely limited,” the researchers concluded. “Considering the prevalence of manipulated images in the media, on social networking sites, and in other domains, our findings warrant concern about the extent to which people may be frequently fooled in their daily lives.”

If that seems a little depressing, just wait — it gets worse.

“Future research might also investigate potential ways to improve people’s ability to spot manipulated photos,” the team wrote. “However, our findings suggest that this is not going to be a straightforward task. We did not find any strong evidence to suggest there are individual factors that improve people’s ability to detect or locate manipulations.”

The study was published last week in the journal Cognitive Research: Principles and Implications.

And if you’re still wondering about the photos with this story, the larger image is the fake. The boat in the background was digitally inserted.

karen.kaplan@latimes.com Twitter @LATkarenkaplan

RESEARCHERS found photos on the Internet and digitally altered them to test how well people could spot a fake. Which of these is real? (Photographs by Sophie Nightingale / Cognitive Research)
ON AVERAGE, volunteers correctly classified photos as either original or altered 66% of the time, but only 45% of the changes to images were correctly identified.
