The Atlanta Journal-Constitution

Facebook’s new 0-to-1 scale rates user trustworthiness

- By Elizabeth Dwoskin

SAN FRANCISCO — Facebook has begun to assign its users a reputation score, predicting their trustworthiness on a scale from zero to one.

The previously unreported ratings system, which Facebook has developed over the last year, shows that the fight against the gaming of tech systems has evolved to include measuring the credibility of users to help identify malicious actors.

Facebook developed its reputation assessments as part of its effort against fake news, Tessa Lyons, the product manager who is in charge of fighting misinformation, said in an interview. The company, like others in tech, has long relied on its users to report problematic content — but as Facebook has given people more options, some users began falsely reporting items as untrue, a new twist on information warfare that it had to account for.

It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” said Lyons.

A user’s trustworthiness score between zero and one isn’t meant to be an absolute indicator of a person’s credibility, Lyons said, nor is there a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk. Facebook is also monitoring which users have a propensity to flag content published by others as problematic, and which publishers are considered trustworthy by users.

It is unclear what other criteria Facebook measures to determine a user’s score, whether all users have a score, and in what ways the scores are used.

The reputation assessments come at a moment when Silicon Valley, faced with Russian meddling, fake news and ideological actors who abuse its policies, is recalibrating its approach to risk — and is finding untested, algorithmically driven ways to understand who poses a threat. Twitter, for example, now factors in the behavior of other accounts in a person’s network as a risk factor in judging whether a person’s tweets should be spread.

But how these new credibility systems work is highly opaque, and the companies are wary of discussing them, in part because doing so might invite further gaming — a predicament that the firms increasingly find themselves in as they weigh calls for more transparency around their decision-making.

“Not knowing how [Facebook is] judging us is what makes us uncomfortable,” said Claire Wardle, director of First Draft, a research lab within Harvard Kennedy School that focuses on the impact of misinformation and is a fact-checking partner of Facebook, of the efforts to assess people’s credibility. “But the irony is that they can’t tell us how they are judging us — because if they do, the algorithms that they built will be gamed.”

The system Facebook built for users to flag potentially unacceptable content has in many ways become a battleground. The activist Twitter account Sleeping Giants called on followers to take technology companies to task over the conservative conspiracy theorist Alex Jones and his Infowars site, leading to a flood of reports about hate speech that resulted in him and Infowars being banned from Facebook and other tech companies’ services.

At the time, executives at the company questioned whether the mass reporting of Jones’ content was part of an effort to trick Facebook’s systems. False reporting has also become a tactic in far-right online harassment campaigns, experts say.

Tech companies have a long history of using algorithms to make predictions about people, from how likely they are to buy products to whether they are using a false identity. But against the backdrop of increased misinformation, they are now making increasingly sophisticated editorial choices about who is trustworthy.

ANDREW HARRER / BLOOMBERG — Facebook says it has been gauging users’ credibility to help identify malicious actors as part of its effort against fake news.
