Khaleej Times

Internet giants try to rein in offensive ad targeting

- Sarah Frier, Mark Bergen and Selina Wang

The world’s largest digital advertising companies reined in their automated moneymaking machines after the systems were shown to spit out ads based on racist and other offensive information.

Facebook shut off a key self-service ad tool, while Google stopped its main Search ad system from automatically suggesting offensive phrases for targeting. The moves are the latest sign of rising scrutiny of the largest US Internet companies and how their software-driven services and ad businesses are influencing society.

The companies have thrived on their ability to offer targeted ads on a massive scale across huge audiences without much human intervention. This week, several news organisations showed they could buy ads based on racist and antisemitic terms or categories. The biggest advertisers are unlikely to run marketing campaigns like this, but it shows how these systems are open to abuse and may require more hands-on monitoring.

“These tools are so easy to use that, without trying very hard, it’s relatively easy to expose the downsides of automated ad sales,” said Brian Wieser, a Pivotal Research Group analyst and critic of Facebook and Google.

Facebook said advertisers will no longer be able to target people by how they describe their education or employer after finding that some were filling in those fields with offensive content. The social networking company will remove targeting by self-reported education, field of study, job title and employment fields in user profiles until it can fix the problem in its self-service advertising system. The decision came after investigative news site ProPublica found advertisers could target users who express interest in antisemitic categories like “Jew haters”.

“We are removing these self-reported targeting fields until we have the right processes in place to help prevent this issue,” the company said.

The system had automatically been populating interest categories based on what community members post about themselves. “We prohibit advertisers from discriminating against people based on religion and other attributes,” the company said.

“However, there are times where content is surfaced on our platform that violates our standards. We know we have more work to do.”

Facebook software creates targeting categories for advertisers automatically, and the company adjusts them after people notice problems. Facebook has run into trouble with this kind of reactive enforcement before, both in its ad business and in consumer-facing services. Its live video service has broadcast murders and suicides that had enough time to go viral before the company noticed and took them down.

Google’s AdWords system, one of the most profitable businesses ever created on the Internet, was found wanting in a similar way. It runs ads based on phrases, or keywords, that people type into the company’s search engine. This is very useful for companies selling shampoo or clothes, but a Buzzfeed report highlighted how it can work with extremist terms, too.

Buzzfeed showed that marketers running Search ads against offensive search terms like “black people destroy everything” are automatically fed other racist suggestions. Alphabet’s Google blocked several of the ads from running, but not all. — Bloomberg

Facebook said advertisers will no longer be able to target people by how they describe their education or employer. — Reuters
