The Register Citizen (Torrington, CT)

Will more artificial intelligence spook users?

Social media giant struggling with erosion of consumer trust

- TRIBUNE NEWS SERVICE

WASHINGTON — Social media companies have embraced artificial intelligence tools to scrub their platforms of hate speech, terrorist propaganda and other content deemed noxious. But will those tools censor other content? Can a program judge the value of speech?

Facebook founder Mark Zuckerberg told Congress last week that his company is rapidly developing AI tools to “identify certain classes of bad activity proactively and flag it for our team.”

It is one of several moves by Facebook as it struggles with an erosion of consumer trust over its harvesting of user data, its past vulnerability to targeted political misinformation and the opaqueness of the formulas upon which its news feeds are built.

Some technologists believe that AI tools won’t resolve these issues that Facebook and other social media companies face.

“The problem is that surveillance is Facebook’s business model: surveillance in order to facilitate psychological manipulation,” said Bruce Schneier, a well-known security expert and privacy specialist. “Whether it’s done by people or (artificial intelligence) is in the noise.”

Zuckerberg said his Menlo Park, Calif., company relies on both AI tools and thousands of employees to review content. By the end of the year, he said, some 20,000 Facebook employees will be “working on security and content review.”

The company is developing its AI tools to track down hate speech and fake news on its platform and views the tools as a “scalable way to identify and root out most of this harmful content,” he said, indicating several times over 10 hours of testimony on two days that Facebook algorithms can find objectionable content faster than humans.

“Today, as we sit here, 99 percent of the ISIS and al-Qaida content that we take down on Facebook, our AI systems flag before any human sees it,” Zuckerberg said at a Senate hearing, referring to extremist Islamic groups.

The artificial intelligence systems work in conjunction with a counterterrorism team of humans that Zuckerberg said numbers 200 employees. “I think we have capacity in 30 languages that we’re working on,” he said.

Other existing AI tools “do a better job of identifying fake accounts that may be trying to interfere in elections or spread misinformation,” he said. After fake accounts placed political information on Facebook that disrupted the 2016 election, Facebook proactively took down “tens of thousands of fake accounts” before French and German elections in 2017, and Alabama’s special election for a vacant Senate seat last December, he added.

Facebook is far from alone among social media companies harnessing artificial intelligence to assist humans monitoring content.

“AI tools in concert with humans can do better than either can do alone,” said Wendell Wallach, an investigator at The Hastings Center, a bioethics research institute in Garrison, New York.

But Wallach noted that many users do not understand artificial intelligence, and Big Tech may face a backlash like the one food companies face over genetically modified ingredients.

“The leading AI companies, which happen to be the same as the leading digital companies at the moment, understand that there is a GMO-like elephant that could jump out of the AI closet,” Wallach said.

Already, concern is mounting among conservatives on Capitol Hill that platforms like Facebook tilt to the political left, whether AI tools or humans are involved in making content decisions.

“You recognize these folks?” Rep. Billy Long, R-Mo., asked Zuckerberg while holding up a photo of two sisters.

“Is that Diamond and Silk?” Zuckerberg asked, referring to two black social media personalities who are fervent supporters of President Donald Trump.

Indeed, it was, Long said, and Facebook had deemed them “unsafe.”

“What is unsafe about two black women supporting President Donald J. Trump?”

Zuckerberg later noted that his Facebook team “made an enforcement error, and we’ve already gotten in touch with them to reverse it.”

Artificial intelligence tools excel at identifying salient information out of masses of data but struggle to understand context, especially in spoken language, experts said.

“The exact same sentence, depending on the relationship between two individuals, could be an expression of hate or an expression of endearment,” said David Danks, an expert on ethics around autonomous systems at Carnegie Mellon University. He cited the use of the “N-word,” which between some people can be a friendly term, but is also widely considered hate speech in other contexts.

Any errors that AI tools make in such linguistic minefields could be interpreted as censorship or political bias that could further diminish trust in social media companies.

“The general public, I think, is much less trusting of these companies,” Danks said.

Eventually, he said, the algorithms and AI tools of a handful of companies will earn greater public trust, even as consumers do not understand how they operate.

“I don’t understand in many ways how my car works but I still trust it to function in all the ways I need it to,” Danks said.

Just as librarians once drew criticism for using subjective judgment in taking books off the shelves, social media companies can face criticism that their AI tools overreach.

“Twitter faces this,” said James J. Hughes, executive director of the Institute for Ethics and Emerging Technologies, in Boston. “Pinterest and Instagram are always taking down artists’ websites who happen to have naked bodies in them when they think they are porn, when they are not.

“And they are doing that based on artificial intelligence algorithms that flag how much naked flesh is in the picture.”

In his testimony, Zuckerberg said AI tools were increasingly adept at “identifying fake accounts that may be trying to interfere in elections or spread misinformation.”

Facebook has admitted that a Russian agency used Facebook to spread misinformation that reached up to 126 million people around the time of the 2016 presidential vote, and that the personal data of 87 million people may have been misused by the firm Cambridge Analytica to target voters in favor of Trump.

Zuckerberg told senators that Facebook’s delay in identifying Russian efforts to interfere in the election was “one of my greatest regrets in running the company” and pledged to do better to combat manipulation for this year’s election.

As legislators wrestled over whether Facebook and other social media companies need regulation, Zuckerberg repeatedly faced questions over the nature of his company. Is it also a media company because it produces content? A software company? A financial services firm that supports money transfers?

“I consider us to be a technology company, because the primary thing that we do is have engineers who write code and build products and services for other people,” Zuckerberg told a House hearing.

Experts say that answer doesn’t address complex issues around platforms that increasingly resemble public utilities.

“The electric company is not allowed to say, ‘We don’t like your political views, therefore we are not going to give you electricity,’ ” Danks said.

“If somebody is knocked off of Facebook, is that tantamount to the electric company cutting off their electricit­y? Or is it more like the person who is really loud and obnoxious in a bar, and the owner says, ‘You need to leave now’?”

Photo caption (Jason Alden / Bloomberg): Facebook recently took out ads in U.S. and British newspapers apologizing for not doing more to prevent customer data leaks.

Photo caption (Andrew Harrer / Bloomberg): Mark Zuckerberg, chief executive officer and founder of Facebook, speaking last week at a House Energy and Commerce Committee hearing in Washington, said Facebook does collect digital information on consumers who aren’t registered as users,...
