San Francisco Chronicle

Unexpected front in fighting fake news, hate speech online

By Marissa Lang

The next push for more transparency in how Facebook and Alphabet deal with fake news and hate speech may come not from users or executives, but from investors.

Shareholders will go into the companies’ annual meetings next month with exhaustive research suggesting the corporations are not doing enough to stem the flow of lies and misinformation, thanks to a report released Tuesday by an organization that advocates for corporate policy changes.

The report, published by the nonprofit Open Mic, recommends that Facebook and Alphabet, parent of the Google search engine and YouTube video site, release annual assessments of the impact of propaganda, fake news and hate speech, as well as what they’re doing to address those issues. Several shareholders say those policy changes would be good for business and would help restore public confidence in two of the biggest tech corporations in the world.

“These companies would have you believe that (fake news) is kind of a temporary hiccup that they’re experiencing,” said Michael Connor, Open Mic’s executive director. “When you dig down and find out what is at play here, there are some serious governance issues and management issues in terms of how these companies have developed technology and launched that technology into the world, and then waited until long after that to deal with the unintended results of it.”

The first test of these recommendations will come in a few weeks, when Facebook investors convene for the company’s annual shareholders meeting on June 1.

Natasha Lamb, managing partner of investment firm Arjuna Capital, will ask shareholders to vote on a proposal to require that the social networking company issue a report “reviewing the public policy issues associated with fake news enabled by Facebook. The report should review the impact of current fake news flows and management systems on the democratic process, free speech and a cohesive society, as well as reputational and operational risks from potential public policy developments.”

Arjuna created the proposal in conjunction with investment management firm Baldwin Brothers.

Though Lamb is not confident the measure will pass, she hopes it sparks a conversation about how Facebook can be a more responsible arbiter of information.

A similar proposal has been drafted and is expected to be voted on at Alphabet’s annual shareholders meeting on June 7.

Open Mic’s report will bolster shareholder proposals like hers, Lamb said.

“Facebook, whether they like it or not, controls the conversation,” Lamb said. “They have a responsibility to ensure their platform is not being misused by propaganda, misinformation and hate speech.”

Since the November election, Facebook and Google have been grappling with the issue of fake news — misinformation meant to sow confusion and division for economic and/or political gain — and their roles as traffic cops on the information superhighway.

First they cut cash flow to fake-news websites. Then they began to offer services meant to help users discern what information is true and what is not.

But, in an effort to maintain distance from the controversy over fake news, the companies have largely outsourced fact-checking, turning to other organizations, or users themselves, to police the Web.

“As investors, right now we’re relying on third-party analysis to determine whether or not what Facebook and Google are doing is actually effective,” Lamb said. “We saw that Google banned 200 publishers in 2016. That seems like a drop in the bucket. Did that fix the problem? How pernicious is the problem? How many people do they have on board dealing with this? These are some of the questions we want answered.”

Among the recommendations outlined in Tuesday’s report, Open Mic has called for tech companies to start acting more like media companies, a term that both Facebook and Google have adamantly resisted.

The organization suggests tech companies should hire “ombudspersons” or other figures to hold the companies accountable to their users and offer assessments of how corporate actions may impact the public interest.

Tuesday’s report also recommends that Facebook and Alphabet publish annual impact assessments on fake news, propaganda campaigns and hate speech that “are transparent, accountable” and provide an opportunity for those affected by any fake-news crackdown to appeal.

Those assessments, the Open Mic report said, should “include definitions of these terms; metrics; the role of algorithms; the extent to which staff or third parties evaluate fabricated content claims; and strategies and policies to appropriately manage the issues without negative impact on free speech.”

Last month Google announced a product it calls Fact Check, which will pair headlines with a notification stating whether the claim has been verified by a reputable news agency or fact-checking organization.

Facebook introduced a similar feature in partnership with fact-checking groups like PolitiFact and FactCheck.org in December. Fact-checking organizations agreed to verify questionable claims on Facebook in order to provide users an impartial assessment of accuracy.

“Google and Facebook are data companies — they have billions of bits of data,” Connor said. “So what we’re saying is, show us how effective those efforts are. In the absence of facts, those efforts are just window-dressing.”

Nearly two-thirds of Americans reported feeling confused about basic facts of current events and issues due to made-up news stories, according to a December study from the Pew Research Center.

But fake news doesn’t just damage the reputations of big tech companies like Facebook and Google, Lamb said — it’s bad for their business. She cited fines being levied in Germany against companies that allow fake news or offensive content to remain online, investigations in the United Kingdom, and comments from American legislators on both sides of the aisle who believe the government has a responsibility to curtail the spread of fake news.

“There’s a public policy risk here,” she said. “There’s also a palpable erosion of trust, which can affect how sticky these companies are.”

Alphabet has been under fire since a March investigation by the Wall Street Journal found that Google had been running advertisements on YouTube videos containing hate speech or espousing white-supremacist points of view. Though this is hardly the only instance of objectionable speech on social media, Lamb said, it is the clearest example of how a company’s failure to address offensive content can hurt its bottom line.

YouTube has since lost millions in advertising as companies — including AT&T, Verizon, Pepsi and Johnson & Johnson — pulled their ads from the online video site.

Last month, Google released a report that stated 0.25 percent of its daily traffic returns offensive or “clearly misleading content, which is not what people are looking for.”

Facebook and Google have for months cut off fake-news sites’ access to their advertising tools in an effort to remove incentives for those who publish misleading or false content for profit. Both companies also have a list of other types of websites they ban from their advertising tools, including sites with offensive content such as hate speech and pornography.

But some investors worry that by simply blocking those websites — or removing offensive content as it arises — the companies may be playing a game of Internet Whac-A-Mole without a broader strategy.

“How does Google or Facebook decide what is hate speech? Who makes that call? And when they do — or an algorithm does — what happens? We don’t really know,” Connor said. “What troubles us is (policing offensive content) happens on an ad hoc basis. It’s not clear to us that it’s any part of a well-developed plan.”
