Internet giants try to rein in offensive ad targeting
The world’s largest digital advertising companies reined in their automated moneymaking machines after the systems were shown to spit out ads based on racist and other offensive information.
Facebook shut off a key self-service ad tool, while Google stopped its main Search ad system from automatically suggesting offensive phrases for targeting. The moves are the latest sign of rising scrutiny of the largest US Internet companies and how their software-driven services and ad businesses are influencing society.
The companies have thrived on their ability to offer targeted ads at massive scale across huge audiences without much human intervention. This week, several news organisations showed they could buy ads based on racist and anti-Semitic terms or categories. The biggest advertisers are unlikely to run marketing campaigns like this, but the reports show how these systems are open to abuse and may require more hands-on monitoring.
“These tools are so easy to use that, without trying very hard, it’s relatively easy to expose the downsides of automated ad sales,” said Brian Wieser, a Pivotal Research Group analyst and critic of Facebook and Google.
Facebook said advertisers will no longer be able to target people by how they describe their education or employer after finding that some were filling in those fields with offensive content. The social networking company will remove targeting by self-reported education, field of study, job title and employment fields in user profiles until it can fix the problem in its self-service advertising system. The decision came after investigative news site ProPublica found advertisers could target users who express interest in anti-Semitic categories like “Jew haters”.
“We are removing these self-reported targeting fields until we have the right processes in place to help prevent this issue,” the company said.
The system had automatically been populating interest categories based on what community members post about themselves. “We prohibit advertisers from discriminating against people based on religion and other attributes,” the company said.
“However, there are times where content is surfaced on our platform that violates our standards. We know we have more work to do.”
Facebook software creates targeting categories for advertisers automatically, and the company adjusts them only after people notice problems. Facebook has run into similar issues with this type of reactive enforcement before, both in its ad business and consumer-facing services. Its live video service has shown murders and suicides that had time to go viral before the company noticed them and took them down.
Google’s AdWords system, one of the most profitable businesses ever created on the Internet, was found wanting in a similar way. It runs ads based on phrases, or keywords, that people type into the company’s search engine. This is very useful for companies selling shampoo or clothes, but a Buzzfeed report highlighted how it can work with extremist terms, too.
Buzzfeed showed how marketers running Search ads against offensive search terms like “black people destroy everything” are automatically fed other racist suggestions. Alphabet’s Google blocked several of the ads from running, but not all. — Bloomberg