Business Standard

FACEBOOK, GOOGLE TRY TO REIN IN OFFENSIVE AD TARGETING

Facebook removes advertisers’ ability to target ‘Jew haters’

- SARAH FRIER, MARK BERGEN & SELINA WANG

The world’s largest digital advertising companies reined in their automated money-making machines after the systems were shown to spit out ads based on racist and other offensive information.

Facebook shut off a key self-service ad tool, while Google stopped its main Search ad system from automatically suggesting offensive phrases for targeting. The moves are the latest sign of rising scrutiny of the largest US internet companies and of how their software-driven services and ad businesses are influencing society.

The companies have thrived on their ability to offer targeted ads on a massive scale across huge audiences without much human intervention. This week, several news organisations showed they could buy ads based on racist and anti-Semitic terms or categories. The biggest advertisers are unlikely to run marketing campaigns like this, but the reports show how these systems are open to abuse and may require more hands-on monitoring.

“These tools are so easy to use that, without trying very hard, it’s relatively easy to expose the downsides of automated ad sales,” said Brian Wieser, a Pivotal Research Group analyst and critic of Facebook and Google.

Facebook said advertisers will no longer be able to target people by how they describe their education or employer after finding that some were filling in those fields with offensive content.

The social networking company will remove targeting by self-reported education, field of study, job title and employment fields in user profiles until it can fix the problem in its self-service advertising system. The decision came after investigative news site ProPublica found advertisers could target users who express interest in anti-Semitic categories like “Jew haters.”

“We are removing these self-reported targeting fields until we have the right processes in place to help prevent this issue,” the company said.

The system had automatically been populating interest categories based on what community members post about themselves. “We prohibit advertisers from discriminating against people based on religion and other attributes,” the company said. “However, there are times where content is surfaced on our platform that violates our standards. We know we have more work to do.”

Facebook software creates targeting categories for advertisers automatically, and the company adjusts them after people notice problems. Facebook has run into similar issues with this type of reactive enforcement before, both in its ad business and in its consumer-facing services. Its live video service has shown murders and suicides that had enough time to go viral before the company noticed and took them down. Congress is investigating how Facebook’s ad systems were used, likely by Russia-based entities, to influence the 2016 US presidential election.

Google’s AdWords system, one of the most profitable businesses ever created on the internet, was found wanting in a similar way. It runs ads based on phrases, or keywords, that people type into the company’s search engine. This is very useful for companies selling shampoo or clothes, but a Buzzfeed report on Friday highlighted how it can work with extremist terms, too. Buzzfeed showed that marketers running Search ads against offensive search terms like “black people destroy everything” are automatically fed other racist suggestions. Alphabet’s Google blocked several of the ads from running, but not all.

“In this instance, ads didn’t run against the vast majority of these keywords, but we didn’t catch all these offensive suggestions. That’s not good enough and we’re not making excuses,” Sridhar Ramaswamy, Google’s ads chief, said in a statement. “We’ve already turned off these suggestions, and any ads that made it through, and will work harder to stop this from happening again.”

Earlier this year, the company was battered by an advertising boycott of its YouTube online video service. Marketers were concerned about ads appearing next to offensive videos. With YouTube, ads can run on a wide, unpredictable range of videos. With Search, advertisers have tighter control over which keywords they choose to buy ads against.

Flaws in Twitter’s automated ad system were also exposed on Friday. The social media company’s platform tells marketers it has millions of users interested in terms like “wetback,” “Nazi” and the N-word, The Daily Beast reported. The publication ran ads targeting users who the system said were likely to respond to the terms, and Twitter’s software did not require the campaigns to be approved before they ran, the news site said.

Twitter said the terms used in The Daily Beast story have been blacklisted for several years and the company is looking into how the publication was able to put the ads on the social network. “Twitter actively prohibits and prevents any offensive ads from appearing on our platform,” the company said in an email.

Photo caption: “We’ve already turned off these suggestions, and any ads that made it through,” said Google’s ads chief Sridhar Ramaswamy
