Houston Chronicle

AI and the job of cleaning up Facebook.

‘It’s never going to go to zero’

By Cade Metz and Mike Isaac

MENLO PARK, Calif. — Mike Schroepfer, Facebook’s chief technology officer, was tearing up.

For half an hour, we had been sitting in a conference room at Facebook’s headquarters, surrounded by whiteboards covered in blue and red marker, discussing the technical difficulties of removing toxic content from the social network. Then we brought up an episode where the challenges had proved insurmountable: the shootings in Christchurch, New Zealand.

In March, a gunman had killed 51 people in two mosques there and live streamed it on Facebook. It took the company roughly an hour to remove the video from its site. By then, the bloody footage had spread across social media.

Schroepfer went quiet. His eyes began to glisten.

“We’re working on this right now,” he said after a minute, trying to remain composed. “It won’t be fixed tomorrow. But I do not want to have this conversation again six months from now. We can do a much, much better job of catching this.”

The question is whether that is really true or if Facebook is kidding itself.

For the past three years, the social network has been under scrutiny for the proliferation of false, misleading and inappropriate content that people publish on its site. In response, Mark Zuckerberg, Facebook’s chief executive, has invoked a technology that he says will help eliminate the problematic posts: artificial intelligence.

Before Congress last year, Zuckerberg testified that Facebook was developing machine-based systems to “identify certain classes of bad activity” and declared that “over a five- to 10-year period, we will have AI tools” that can detect and remove hate speech. He has since repeated these claims with the media, on conference calls with Wall Street and at Facebook’s own events.

Schroepfer — or Schrep, as he is known internally — is the person at Facebook leading the efforts to build the automated tools to sort through and erase the millions of such posts. But the task is Sisyphean, he acknowledged over the course of three recent interviews.

That’s because every time Schroepfer and his more than 150 engineering specialists create AI solutions that flag and squelch noxious material, new and dubious posts that the AI systems have never seen before pop up — and are thus not caught. The task is made more difficult because “bad activity” is often in the eye of the beholder and because humans, let alone machines, cannot agree on what that is.

In one interview, Schroepfer acknowledged after some prodding that AI alone could not cure Facebook’s ills. “I do think there’s an endgame here,” he said. But “I don’t think it’s ‘everything’s solved’ and we all pack up and go home.”

The pressure is on, however. This past week, after widespread criticism over the Christchurch video, Facebook changed its policies to restrict the use of its livestreaming service. At a summit in Paris with President Emmanuel Macron of France and Prime Minister Jacinda Ardern of New Zealand on Wednesday, the company signed a pledge to re-examine the tools it uses to identify violent content.

Schroepfer, 44, is in a position he never wanted to be in. For years, his job was to help the social network build a top-flight AI lab, where the brightest minds could tackle technological challenges like using machines to pick out people’s faces in photos. He and Zuckerberg wanted an AI operation to rival Google’s, which was widely seen as having the deepest stable of AI researchers. He recruited Ph.D.s from New York University, the University of London and the Pierre and Marie Curie University in Paris.

But along the way, his role evolved into that of threat remover and toxic content eliminator. Now he and his recruits spend much of their time applying AI to spotting and deleting death threats, videos of suicides, misinformation and outright lies.

“None of us have ever seen anything like this,” said John Lilly, a former chief executive of Mozilla and now a venture capitalist at Greylock Partners, who studied computer science with Schroepfer at Stanford University in the mid-1990s. “There is no one else to ask about how to solve these problems.”

Facebook allowed us to talk to Schroepfer because it wanted to show how AI is catching troublesome content and, presumably, because it was interested in humanizing its executives. The chief technology officer often shows his feelings, according to many who know him.

“I don’t think I’m speaking out of turn to say that I’ve seen Schrep cry at work,” said Jocelyn Goldfein, a venture capitalist at Zetta Venture Partners who worked with him at Facebook.

But few could have predicted how Schroepfer would react to our questions. In two of the interviews, he started with an optimistic message that AI could be the solution, before becoming emotional. At one point, he said coming to work had sometimes become a struggle. Each time, he choked up when discussing the scale of the issues that Facebook was confronting and his responsibilities in changing them.

“It’s never going to go to zero,” he said of the problematic posts.

‘TALKING ENGINEERS OFF THE LEDGE OF QUITTING’

From his earliest days at Facebook, Schroepfer was viewed as a problem solver.

Raised in Delray Beach, Fla., where his parents ran a 1,000-watt AM radio station that played rock ’n’ roll oldies before switching to R&B, Schroepfer moved to California in 1993 to attend Stanford. There, he majored in computer science for his undergraduate and graduate degrees, mingling with fellow technologists like Lilly and Adam Nash, who is now a top executive at the file-sharing company Dropbox.

In 2008, Dustin Moskovitz, a co-founder of Facebook, stepped down as its head of engineering. Enter Schroepfer, who came to the company to take that role. Facebook served about 100 million people at the time, and his mandate was to keep the site up and running as its numbers of users exploded. The job involved managing thousands of engineers and tens of thousands of computer servers across the globe.

“Most of the job was like a bus rolling downhill on fire with four flat tires. Like: How do we keep it going?” Schroepfer said. A big part of his day was “talking engineers off the ledge of quitting” because they were dealing with issues at all hours, he said.

Over the next few years, his team built a range of new technologies for running a service so large. (Facebook has more than 2 billion users today.) It rolled out new programming tools to help the company deliver Facebook to laptops and phones more quickly and reliably. It introduced custom server computers in data centers to streamline the operation of the enormous computer network. In the end, Facebook significantly reduced service interruptions.

For his efforts, Schroepfer gained more responsibility. In 2013, he was promoted to chief technology officer. His mandate was to home in on new areas of technology that the company should explore, with an eye on the future. As a sign of his role’s importance, he uses a desk beside Zuckerberg’s at Facebook headquarters and sits between the chief executive and Sheryl Sandberg, the chief operating officer.

“He’s a good representation of how a lot of people at the company think and operate,” Zuckerberg said of Schroepfer. “Schrep’s superpower is being able to coach and build teams across very diverse problem areas. I’ve never really worked with anyone else who can do that like him.”

So it was no surprise when Zuckerberg turned to Schroepfer to deal with all the toxicity streaming onto Facebook.

BROCCOLI VS. MARIJUANA

Inside a Facebook conference room on a recent afternoon, Schroepfer pulled up two images on his Apple laptop computer. One was of broccoli, the other of clumped-up buds of marijuana. Everyone in the room stared at the images. Some of us were not quite sure which was which.

Schroepfer had shown the pictures to make a point. Even though some of us were having trouble distinguishing between the two, Facebook’s AI systems were now able to pinpoint patterns in thousands of images so that they could recognize marijuana buds on their own. Once the AI flagged the pot images, many of which were attached to Facebook ads that used the photos to sell marijuana over the social network, the company could remove them.

“We can now catch this sort of thing — proactively,” Schroepfer said.
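Facebook has not published the design of these classifiers, but the task Schroepfer demonstrated is, at its core, standard supervised image classification: show a model thousands of labeled examples of each class and let it learn the distinguishing patterns. The sketch below is purely illustrative of that idea; the folder layout, the choice of a pretrained ResNet-18 and the training settings are our assumptions, not details Facebook has disclosed.

```python
# Illustrative sketch only: a binary broccoli-vs.-marijuana classifier,
# fine-tuned from a network pretrained on ImageNet. The "data/broccoli"
# and "data/marijuana" folders of labeled photos are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # input size the pretrained network expects
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder infers the two class labels from the subdirectory names.
dataset = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from general-purpose visual features; replace only the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Starting from a pretrained network is what makes “thousands of images,” rather than millions, a plausible training set: the general visual vocabulary is already learned, and only the final distinction between the two classes has to be fit.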

The problem was that the marijuana-vs.-broccoli exercise was a sign not just of progress but also of the limits that Facebook was hitting. Schroepfer’s team has built AI systems that the company uses to identify and remove pot images, nudity and terrorist-related content. But the systems are not catching all of those pictures, as there is always unexpected content, which means millions of nude, marijuana-related and terrorist-related posts continue reaching the eyes of Facebook users.

Identifying rogue images is also one of the easier tasks for AI. It is harder to build systems to identify false news stories or hate speech. False news stories can easily be fashioned to appear real. And hate speech is problematic because it is so difficult for machines to recognize linguistic nuances.

Delip Rao, head of research at AI Foundation, a nonprofit that explores how artificial intelligence can fight disinformation, described the challenge as “an arms race.” AI is built from what has come before. But so often, there is nothing to learn from. Behavior changes. Attackers create new techniques. By definition, it becomes a game of cat and mouse.

“Sometimes you are ahead of the people causing harm,” Rao said. “Sometimes they are ahead of you.”

On that afternoon, Schroepfer tried to answer our questions about the cat-and-mouse game with data and numbers. He said Facebook now automatically removes 96% of all nudity from the social network. Hate speech was tougher, he said — the company catches 51% of that on the site. (Facebook later said this had risen to 65%.)
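Those percentages describe a proactive rate: of the violating posts that were eventually taken down, the share Facebook’s systems flagged before any user reported them. As a toy illustration of how such a figure is computed (the records and field names below are invented for the example, not Facebook data):

```python
# Toy calculation of a per-category proactive catch rate: of the violating
# posts that were removed, what fraction did automated systems flag before
# any user report? Every record here is invented for illustration.
from collections import defaultdict

removed_posts = [
    {"category": "nudity", "flagged_by": "ai"},
    {"category": "nudity", "flagged_by": "ai"},
    {"category": "nudity", "flagged_by": "user_report"},
    {"category": "hate_speech", "flagged_by": "ai"},
    {"category": "hate_speech", "flagged_by": "user_report"},
    {"category": "hate_speech", "flagged_by": "user_report"},
]

totals = defaultdict(int)
caught_by_ai = defaultdict(int)
for post in removed_posts:
    totals[post["category"]] += 1
    if post["flagged_by"] == "ai":
        caught_by_ai[post["category"]] += 1

for category, total in totals.items():
    rate = caught_by_ai[category] / total
    print(f"{category}: {rate:.0%} caught proactively")
```

One limit of a metric built this way is worth noting: it counts only content that was found and removed at all, so it says nothing about violating posts that no one, human or machine, ever detected.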

Facebook, which can automatically detect and remove problematic live video streams, did not identify the New Zealand video in March, Schroepfer said, because it did not really resemble anything uploaded to the social network in the past. The video gave a first-person viewpoint, like a computer game.

In designing systems that identify graphic violence, Facebook typically works backward from existing images — images of people kicking cats, dogs attacking people, cars hitting pedestrians, one person swinging a baseball bat at another. But, he said, “none of those look a lot like this video.”

The novelty of that shooting video was why it was so shocking, Schroepfer said. “This is also the reason it did not immediately get flagged,” he said, adding that he had watched the video several times to understand how Facebook could identify the next one.

“I wish I could unsee it,” he said.

Photo: Mike Schroepfer, chief technology officer of Facebook. Facebook has heralded artificial intelligence as a solution to its toxic content problems, but Schroepfer says it won’t solve everything. (Peter Prato / The New York Times)

Photo: Mark Zuckerberg, chief executive of Facebook, testifies to the Senate inside the Hart Hearing Room in Washington, April 10, 2018. (Tom Brenner / The New York Times)
