TWITTER FACES THE TRUTH

HOW JACK DORSEY’S FREE-SPEECH PLATFORM WAS HIJACKED BY THE DARK SIDE

Fast Company - Front Page - BY AUSTIN CARR AND HARRY MCCRACKEN

YAIR ROSENBERG WANTED TO TROLL THE TROLLS.

Rosenberg, a senior writer for the Jewish-focused news-and-culture website Tablet Magazine, had become a leading target of anti-Semitic Twitter users during his reporting on the 2016 U.S. presidential campaign. Despite being pelted with slurs, he wasn’t overly fixated on the Nazis who had embraced the service. “For the most part I found them rather laughable and easily ignored,” he says. But one particular type of Twitter troll did gnaw at him: the ones who posed as minorities—using stolen photos of real people—and then infiltrated high-profile conversations to spew venom. “Unsuspecting readers would see this guy who looks like an Orthodox Jew or a Muslim woman saying something basically offensive,” he explains. “So they think, Oh, Muslims are religious. Jews are religious. And they are horrifically offensive people.” Rosenberg decided to fight back. Working with Neal Chandra, a San Francisco–based developer, he created an automated Twitter bot called Imposter Buster. Starting in December 2016, it inserted itself into the same Twitter threads as the hoax accounts and politely exposed the trolls’ masquerade (“FYI, this account is a racist impersonating a Jew to defame Jews”). Imposter Buster soon came under attack itself—by racists who reported it to Twitter for harassment. Unexpectedly, the company sided with the trolls: It suspended the bot for spammy behavior the following April. With assistance from the Anti-Defamation League, Rosenberg and Chandra got that decision reversed three days later. But their targets continued to file harassment reports, and last December Twitter once again blacklisted Imposter Buster, this time for good.

Rosenberg, who considers his effort good citizenship rather than vigilantism, still isn’t sure why Twitter found it unacceptable; he never received an explanation directly from the company. But the ruling gave racists a win by technical knockout.
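Mechanically, a bot like Imposter Buster is simple in outline: watch a curated list of hoax accounts and post a corrective reply whenever they tweet. Below is a minimal sketch of that pattern in Python using the Tweepy library; the account IDs, credentials, and pacing are illustrative assumptions, not Rosenberg and Chandra’s actual code.

```python
import time

import tweepy

# Hypothetical: IDs of accounts previously identified as impersonators.
KNOWN_IMPOSTERS = {"123456789", "987654321"}

REPLY_TEXT = ("FYI, this account is a racist impersonating "
              "a Jew to defame Jews.")  # the bot's wording, per the article

# Placeholder credentials; a real bot needs its own API keys.
client = tweepy.Client(
    consumer_key="...",
    consumer_secret="...",
    access_token="...",
    access_token_secret="...",
)

replied = set()  # tweet IDs already answered, so each gets one reply


def patrol():
    """Reply once, in-thread, to each new tweet from a known imposter."""
    for user_id in KNOWN_IMPOSTERS:
        resp = client.get_users_tweets(id=user_id, max_results=5)
        for tweet in resp.data or []:
            if tweet.id in replied:
                continue
            client.create_tweet(text=REPLY_TEXT,
                                in_reply_to_tweet_id=tweet.id)
            replied.add(tweet.id)
            time.sleep(30)  # pace replies; bursts look like spam


while True:
    patrol()
    time.sleep(300)  # poll every five minutes
```

The pacing is not incidental: automated, repetitive replies are exactly what anti-spam heuristics tend to flag, which may help explain how a counter-troll bot ended up suspended for “spammy behavior.”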

For all the ways in which the Imposter Buster saga is unique, it’s also symptomatic of larger issues that have long bedeviled Twitter: abuse, the weaponizing of anonymity, bot wars, and slow-motion decision making by the people running a real-time platform. These problems have only intensified since Donald Trump became president and chose Twitter as his primary mouthpiece. The platform is now the world’s principal venue for politics and outrage, culture and conversation—the home for both #MAGA and #MeToo.

This status has helped improve the company’s fortunes. Daily usage is up a healthy 12% year over year, and Twitter reported its first-ever quarterly profit in February, capping a 12-month period during which its stock doubled. Although the company still seems unlikely ever to match Facebook’s scale and profitability, it’s not in danger of failing. The occasional cries from financial analysts for CEO Jack Dorsey to sell Twitter, or from critics for him to shut it down, look more and more out of step.

Despite Twitter’s more comfortable standing, Dorsey has been increasingly vocal about his service’s problems. “We are committed to making Twitter safer,” the company pledged in its February shareholder letter. On the accompanying investor call, Dorsey outlined an “information quality” initiative to improve content and accounts on the service. Monthly active users have stalled at 330 million—a fact that the company attributes in part to its ongoing pruning of spammers. Twitter’s cleanup efforts are an admission, albeit an implicit one, that the array of troublemakers who still roam the platform—the hate-mongers, fake-news purveyors, and armies of shady bots designed to influence public opinion—are impeding its ability to grow. (Twitter did not make Dorsey, or any other executive, available to be interviewed for this story. Most of the more than 60 sources we spoke to, including 44 former Twitter employees, requested anonymity.)

Though the company has taken significant steps in recent years to remove bad actors, it hasn’t shaken the lingering impression that it isn’t trying hard enough to make the service a safer space. Twitter’s response to negative incidents is often unsatisfying to its users and more than a trifle mysterious—its punishment of Rosenberg, instead of his tormentors, being a prime example. “Please can someone smart make a new website where there’s only 140 characters and no Nazis?” one user tweeted shortly after Twitter introduced 280-character tweets in November.

Twitter is not alone in wrestling with the fact that its product is being corrupted for malevolence: Facebook and Google have come under heightened scrutiny since the presidential election, as more information comes to light revealing how their platforms are being manipulated. The companies’ responses have been timid, reactive, or worse. “All of them are guilty of waiting too long to address the current problem, and all of them have a long way to go,” says Jonathon Morgan, founder of Data for Democracy, a team of technologists who tackle governmental social-impact projects.

The stakes are particularly high for Twitter, given the essential role it plays in enabling breaking news and global discourse. Its challenges, increasingly, are the world’s. How did Twitter get into this mess? Why is it only now addressing the malfeasance that has dogged the platform for years? “Safety got away from Twitter,” says a former VP at the company. “It was Pandora’s box. Once it’s opened, how do you put it all back in again?”

In Twitter’s early days, as the microblogging platform’s founders were figuring out its purpose, its users showed them Twitter’s power for good. As dissidents, activists, and whistle-blowers in global social movements embraced Twitter, free expression became the startup’s guiding principle. “Let the tweets flow,” said Alex Macgillivray, Twitter’s first general counsel, who later served as deputy CTO in the Obama administration. Internally, Twitter thought of itself as “the free-speech wing of the free-speech party.”

This ideology proved naive. “Twitter became so convinced of the virtue of its commitment to free speech that the leadership utterly misunderstood how it was being hijacked and weaponized,” says a former executive.

The first sign of trouble was spam. Child pornography, phishing attacks, and bots flooded the tweetstream. Twitter, at the time, seemed to be distracted by other challenges. When the company appointed Dick Costolo as CEO in October 2010, he was trying to fix Twitter’s underlying infrastructure—the company had become synonymous with its “fail whale” server-error page, which exemplified its weak engineering foundation. Though Twitter was rocketing toward 100 million users during 2011, its antispam team included just four dedicated engineers. “Spam was incredibly embarrassing, and they built these stupidly bare-minimum tools to [fight it],” says a former senior engineer.

Twitter’s trust and safety group, responsible for safeguarding users, was run by Del Harvey, Twitter employee No. 25. She had an atypical résumé for Silicon Valley: Harvey had previously worked with Perverted Justice, a controversial volunteer group that used web chat rooms to ferret out apparent sexual predators, and partnered with NBC’s To Catch a Predator, posing as a minor to lure in pedophiles for arrest on TV. Her lack of traditional technical and policy experience made her a polarizing figure within the organization, though allies have found her passion for safety issues inspiring. In the early days, “she personally responded to individual [affected] users—Del worked tirelessly,” says Macgillivray. “[She] took on some of the most complex issues that Twitter faced. We didn’t get everything right, but Del’s leadership was very often a factor when we did.”

Harvey’s view, championed by Macgillivray and other executives, was that bad speech could ultimately be defeated with more speech, a belief that echoed Supreme Court Justice Louis Brandeis’s landmark 1927 First Amendment opinion arguing that this remedy is always preferable to “enforced silence.” Harvey occasionally used as an example the phrase “Yo bitch,” which bad actors intend as invective but others perceive as a sassy hello. Who was Twitter to decide? The marketplace of ideas would figure it out.

By 2012, spam was mutating into destructive trolling and hate speech. The few engineers in Harvey’s group had built some internal tools to enable her team to more quickly remove illegal content such as child pornography, but they weren’t prepared for the proliferation of harassment on Twitter. “Every time you build a wall, someone is going to build a higher ladder, and there are always more people outside trying to fuck you over than there are inside trying to stop them,” says a former platform engineer. That year, Australian TV personality Charlotte Dawson was subjected to a rash of vicious tweets—e.g., “go hang yourself”—after she spoke out against online abuse. Dawson attempted suicide and was hospitalized. The following summer, in the U.K., after activist Caroline Criado-Perez campaigned to get a woman’s image featured on the 10-pound note, her Twitter feed was deluged with trolls sending her 50 rape threats per hour.

The company responded by creating a dedicated button for reporting abuse within tweets, yet trolls only grew stronger on the platform. Internally, Costolo complained that the “abuse economics” were “backward.” It took just seconds to create an account to harass someone, but reporting that abuse required filling out a time-consuming form. Harvey’s team, earnest about reviewing the context of each reported tweet but lacking a large enough support staff, moved slowly. Multiple sources say it wasn’t uncommon for her group to take months to respond to backlogged abuse tickets. User support agents manually evaluated flagged tweets, but they were so overwhelmed by tickets that if banned users appealed a suspension, they would sometimes simply release the offenders back onto the platform. “They were drowning,” says a source who worked closely with Harvey. “To this day, it’s shocking to me how bad Twitter was at safety.”

Twitter’s leadership, meanwhile, was focused on preparing for the company’s November 2013 IPO, and as a result it devoted the bulk of its engineering resources to the team overseeing user growth, which was key to Twitter’s pitch to Wall Street. Harvey didn’t have the technical support she needed to build scalable solutions to Twitter’s woes.

Toxicity on the platform intensified during this time, especially in international markets. Trolls organized to spread misogynist messages in India and anti-Semitic ones in Europe. In Latin America, bots began infecting elections. Hundreds deployed during Brazil’s 2014 presidential race spread propaganda, prompting a company executive to meet with government officials; according to a source, “pretty much every member of the Brazilian house and senate asked, ‘What are you doing about bots?’ ” (Around this time, Russia reportedly began testing bots of its own to sway public opinion through disinformation.)

It wasn’t until mid-2014, around the time that trolls forced comedian Robin Williams’s daughter, Zelda, off the service in the wake of her father’s suicide, that Costolo had finally had enough. Costolo, who had been the victim of abuse in his own feed, lost faith in Harvey, multiple sources say. He put a different department in charge of responding to user-submitted abuse tickets, though he left Harvey in charge of setting the company’s trust and safety guidelines.

143: Number of points the Dow fell on April 23, 2013, after the Syrian Electronic Army hacked the AP’s Twitter account and spread false rumors about a terror attack on the White House

Soon, the threats morphed again: ISIS began to leverage Twitter to radicalize followers. Steeped in free-speech values, company executives struggled to respond. Once beheading videos started circulating, “there were brutal arguments with Dick,” recalls a former top executive. “He’d say, ‘You can’t show people getting killed on the platform! We should just erase it!’ And [others would argue], ‘But what about a PhD student posting a picture of the Kennedy assassination?’ ” They decided to allow imagery of beheadings, but only until the knife touches the neck, and, according to two sources, the company assigned support agents to search for and report beheading content—so the same team could then remove it. “It was the stupidest thing in the world,” says the source who worked closely with Harvey. “[Executives] already made the policy decision to take down the content, but they didn’t want to build the tools to [proactively] enforce the policy.” (Twitter has since purged hundreds of thousands of ISIS-related accounts, a muscular approach that has won the platform praise.)

Costolo, frustrated with the company’s meager efforts in tackling these problems, sent a company-wide memo in February 2015 complaining that he was “ashamed” by how much Twitter “sucked” at dealing with abuse. “If I could rewind the clock, I’d get more aggressive earlier,” Costolo tells Fast Company, stressing that the “blame” lies with nobody “other than the CEO at the time: me.”

“I often hear people in Silicon Valley talking about fake news and disinformation as problems we can engineer our way out of,” says Brendan Nyhan, codirector of Bright Line Watch, a group that monitors threats to democratic processes. “That’s wrong. People are looking for a solution that doesn’t exist.”

The Valley may be coming around to this understanding. Last year, Facebook and YouTube (which is owned by Google) announced initiatives to expand their content-policing teams to 20,000 and 10,000 workers, respectively. Twitter, meanwhile, had just 3,317 employees across the entire company at the end of 2017, a fraction of whom are dedicated to improving “information quality.”

Putting mass quantities of human beings on the job, though, isn’t a panacea either. It introduces new issues, from personal biases to having to make complicated calls on content in a matter of seconds. “These reviewers use detailed rules designed to direct them to make consistent decisions,” says Susan Benesch, faculty associate at Harvard’s Berkman Klein Center for Internet and Society and director of the Dangerous Speech Project. “That’s a hard thing to do, especially at scale.”

The enormity of this quality-control conundrum helps explain why Twitter frequently fails, at least initially, to remove tweets that users report for harassment—some including allusions to death or rape—even though they would appear to violate its community standards. The company also catches flak for taking action against tweets that do violate these rules but have an extraordinary context, as when it temporarily suspended actress Rose McGowan for including a private phone number in a flurry of tweets excoriating Hollywood notables in the wake of the Harvey Weinstein sexual harassment scandal. “You end up going down a slippery slope on a lot of these things,” says a former C-level Twitter executive. “‘Oh, the simple solution is X!’ That’s why you hear now, ‘Why don’t you just get rid of bots?!’ Well, lots of [legitimate media] use automated [accounts] to post headlines. Lots of these easy solutions are a lot more complex.”

Five months after Costolo’s February 2015 lament, he resigned from Twitter. Cofounder Jack Dorsey, who had run the company until he was fired in 2008, replaced Costolo as CEO (while retaining the same job at his payments company, Square). Dorsey, an English major in a land of computer scientists, had deep thoughts about Twitter’s future, but he couldn’t always articulate them in a way that translated to engineers. “I’d be shocked if you found somebody [to whom] Jack gave an extremely clear articulation of his thesis for Twitter,” says the former top executive, noting that Dorsey has described the service using such metaphors as the Golden Gate Bridge and an electrical outlet for a toaster. Once, he gathered the San Francisco office for a meeting where he told employees he wanted to define Twitter’s mission—and proceeded to play the Beatles’ “Blackbird” as attendees listened in confused silence.

There was no doubt, though, that he believed in Twitter’s defining ethos. “Twitter stands for freedom of expression. We stand for speaking truth to power,” Dorsey tweeted on his first official day back as Twitter’s CEO, in October 2015.

$6B: Peak valuation of Cynk, a fake company with no assets and no revenue, after bots fueled a pump-and-dump stock scheme over two months in mid-2014

By the time Dorsey’s tenure got under way, Twitter had gotten a better handle on some of the verbal pollution plaguing the service. The company’s anti-abuse operations had been taken over by Tina Bhatnagar, a no-nonsense veteran of Salesforce who had little patience for free-speech hand-wringing. Bhatnagar dramatically increased the number of outsourced support agents working for the company and was able to reduce the average response time on abuse-report tickets to just hours, though some felt the process became too much of a numbers game. “She was more like, ‘Just fucking suspend them,’ ” says a source who worked closely with her. If much of the company was guided by Justice Brandeis’s words, Bhatnagar represented Justice Potter Stewart’s famous quote about obscenity: “I know it when I see it.”

This ideological split was reflected in the company’s organizational hierarchy, which kept Harvey and Bhatnagar in separate parts of the company—legal and engineering, respectively—with separate managers. “They often worked on the exact same things but with very different approaches—it was just bonkers,” says a former high-level employee who felt ricocheted between the two factions. Even those seemingly on the same team didn’t always see eye to eye: According to three sources, Colin Crowell, Twitter’s VP of public policy, at one point refused to report to Harvey’s boss, general counsel Vijaya Gadde (Macgillivray’s successor), due in part to disagreements about how best to approach free-speech issues.

Contentiousness grew common: Bhatnagar’s team would want to suspend users it found abusive, only to be overruled by Gadde and Harvey. “That drove Tina crazy,” says a source familiar with the dynamic. “She’d go looking for Jack, but Jack would be at Square, so the next day he’d listen and take notes on his phone and say, ‘Let me think about it.’ Jack couldn’t make a decision without either upsetting the free-speech people or the online-safety people, so things were never resolved.”

Dorsey’s supporters argue that he wasn’t necessarily indecisive—there were simply no easy answers. Disputes that bubbled up to Dorsey were often bizarre edge cases, which meant that any decision he made would be hard to generalize to a wide range of instances. “You can have a perfectly written rule, but if it’s impossible to apply to 330 million users, it’s as good as having nothing,” says a source familiar with the company’s challenges.

Dorsey had other business demands to attend to at the time. When he returned as CEO, user growth had stalled, the stock had declined nearly 70% since its post-IPO high, the company was on track to lose more than $500 million in 2015 alone, and a number of highly regarded employees were about to leave. Although Twitter made some progress in releasing new products, including Moments and its live-video features, it struggled to refresh its core experience. In January 2016, Dorsey teased users with an expansion of Twitter’s long-standing 140-character limit, but it took another 22 months to launch 280-character tweets. “Twitter was a hot mess,” says Leslie Miley, who managed the engineering group responsible for safety features until he was laid off in late 2015. “When you switch product VPs every year, it’s hard to keep a strategy in place.”

Then the U.S. presidential election arrived. All of Twitter’s warts were about to be magnified on the world stage. Twitter’s support agents, the ones reviewing flagged content and wading through the darkest muck of social media, witnessed the earliest warning signs as Donald Trump started sweeping the primaries. “We saw this radical shift,” recalls one agent from that period. Discrimination seemed more flagrant, the propaganda and bots more aggressive. Says another: “You’d remove it and it’d come back within minutes, supporting Nazis, hating Jews, [memes featuring] ovens, and oh, the frog . . . the green frog!” (That would be Pepe, a crudely drawn cartoon that white supremacists co-opted.)

A July 2016 troll attack on SNL star Leslie Jones—incited by alt-right provocateur Milo Yiannopoulos—proved to be a seminal moment for Twitter’s anti-harassment efforts. After Jones was bombarded with racist and sexist tweets, Dorsey met with her personally to apologize, and the company banned Yiannopoulos permanently. It also enhanced its muting and blocking features and introduced an opt-in tool that allows users to filter out what Twitter has determined to be “lower-quality content.” The idea was that Twitter wouldn’t be suppressing free speech—it would merely not be shoving unwanted tweets into its users’ faces.

But these efforts weren’t enough to shield users from the noxiousness of the Clinton–Trump election cycle. During the Jones attack, screenshots of fake, Photoshopped tweets purporting to show divisive things Jones had shared spread virally across the platform. This type of disinformation gambit would become a hallmark of the 2016 election and beyond, and Twitter did not appreciate the strength of this new front in the information wars.

Of the two presidential campaigns, Trump’s better knew how to take advantage of the service to amplify its candidate’s voice. When Twitter landed massive ad deals from the Republican nominee, left-leaning employees complained to the sales team that it should stop accepting Trump’s “bullshit money.”

The ongoing, unresolved disputes over what Twitter should allow on its platform continued to flare into the fall. In October, the company reneged on a $5 million deal with the Trump campaign for a custom #CrookedHillary emoji. “There was vicious [internal] debate and back-channeling to Jack,” says a source involved. “Jack was conflicted. At the eleventh hour, he pulled the plug.” Trump allies later blasted Twitter for its perceived political bias.

On November 8, employees were shocked as the election returns poured in, and the morning after Trump’s victory, Twitter’s headquarters were a ghost town. Employees had finally begun to take stock of the role their platform had played not only in Trump’s rise but in the polarization and radicalization of discourse.

“We all had this ‘holy shit’ moment,” says a product team leader at the time, adding that everyone was asking the same question: “Did we create this monster?”

In the months following Trump’s win, employees widely expected Dorsey to address Twitter’s role in the election head-on, but about a dozen sources indicate that the CEO remained mostly silent on the matter internally. “You can’t take credit for the Arab Spring without taking responsibility for Donald Trump,” says Leslie Miley, the former safety manager.

Over time, though, Dorsey’s thinking evolved, and he seems less ambivalent about what he’ll allow on the platform. Sources cite Trump’s controversial immigration ban and continued alt-right manipulation as influences. At the same time, Twitter began to draw greater scrutiny from the public and the U.S. Congress for its role in spreading disinformation.

Dorsey empowered engineering leaders Ed Ho and David Gasca to go after Twitter’s problems full bore, and in February 2017, the company rolled out more aggressive measures to permanently bar bad actors on the platform and better filter out potentially abusive or low-quality content. “Jack became a little bit obsessed,” says a source. “Engineering in every department was asked to stop working on whatever they were doing and focus on safety.”

Twitter’s safety operations, previously siloed, became more integrated with the consumer-product side of the company. The results have been positive. In May 2017, for example, after learning how much abuse users were being subjected to via Twitter’s direct messages feature, the team overseeing the product came up with the idea of introducing a secondary inbox to capture bad content, akin to a spam folder. “They’re starting to get things right,” says a former manager at the company, “addressing these problems as a combination of product and policy.”
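The mechanics of such a secondary inbox are easy to picture: score each incoming message and divert low-scoring ones into a folder the recipient can inspect on their own terms. Below is a toy sketch of that routing idea in Python; the scoring heuristic, blocklist, and threshold are placeholders, since Twitter’s actual signals are not public.

```python
from dataclasses import dataclass, field


@dataclass
class Inbox:
    primary: list = field(default_factory=list)
    filtered: list = field(default_factory=list)  # the "spam folder" for DMs


def quality_score(message: str, sender_is_stranger: bool) -> float:
    """Placeholder heuristic; a production system would use trained models."""
    score = 1.0
    blocklist = {"kys", "die"}  # illustrative; real lists are curated
    if any(term in message.lower().split() for term in blocklist):
        score -= 0.8
    if sender_is_stranger:
        score -= 0.3  # strangers are down-weighted, not blocked outright
    return score


def route(inbox: Inbox, message: str, sender_is_stranger: bool) -> None:
    # Filtered mail stays readable on request: suppression, not deletion.
    if quality_score(message, sender_is_stranger) < 0.5:
        inbox.filtered.append(message)
    else:
        inbox.primary.append(message)


inbox = Inbox()
route(inbox, "Loved your talk!", sender_is_stranger=True)
route(inbox, "kys", sender_is_stranger=True)
assert inbox.primary == ["Loved your talk!"]
assert inbox.filtered == ["kys"]
```

The design choice worth noticing is the same one behind the “lower-quality content” filter: nothing is deleted, only demoted, which keeps the feature on the product side of the free-speech line the company had drawn for itself.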

During a live video Q&A Dorsey hosted in March, he was asked why trust and safety didn’t work with engineering much earlier. The CEO laughed, then admitted, “We had a lot of historical divisions within the company where we weren’t as collaborative as we could be. We’ve been recognizing where that lack of collaboration has hurt us.”

Even previous victims of Twitter abuse have recognized that the company’s new safety measures have helped. “I think Twitter is doing a better job than they get public credit for,” says Brianna Wu, the developer who became a principal target of Gamergate, the loose-knit collective of trolls whose 2014 attacks on prominent women in the gaming industry were a canary in the Twitter-harassment coal mine. “Most of the death threats I get these days are either sent to me on Facebook or through email, because Twitter has been so effective at intercepting them before I can even see them,” she adds, sounding surprisingly cheery.

Twitter has also been more proactive since the election in banning accounts and removing verifications, particularly of white nationalists and alt-right leaders such as Richard Spencer. (The blue check mark signifying a verified user was originally designed to confirm identity but has come to be interpreted as an endorsement.) According to three sources, Dorsey himself has personally directed some of these decisions.

Twitter began rolling out a series of policy and feature changes last October that prioritized civility and truthfulness over free-speech absolutism. For instance, while threatening murder has always been unacceptable, now even speaking of it approvingly in any context will earn users a suspension. The company has also made it more difficult to bulk-tweet misinformation.

Such crackdowns haven’t yet eliminated the service’s festering problems: After February’s mass shooting at a Parkland, Florida, high school, some surviving students became targets of harassment, and Russia-linked bots reportedly spread pro-gun sentiments and disinformation. Nobody, though, can accuse Twitter of not confronting its worst elements. The pressure on Dorsey to keep this momentum going is coming from Wall Street, too: On a recent earnings call, a Goldman Sachs analyst pressed Dorsey about the company’s progress toward eliminating bots and enforcing safety policies. “Information quality,” Dorsey responded, is now Twitter’s “core job.”

This past Valentine’s Day, Senator Mark Warner entered his stately corner suite in Washington, D.C.’s Hart Senate Office Building, poured himself a Vitaminwater, and rushed into an explanation of why Silicon Valley needs to be held accountable for its role in the 2016 election. As the Democratic vice chairman of the Senate Intelligence Committee, Warner is swamped with high-profile hearings and classified briefings, but the topic is also personal for the self-described “tech guy” who made a fortune in the 1980s investing in telecoms.

Warner is coleading the committee’s investigation into Russian election interference, which has increasingly centered on the growing, unfettered power of technology giants, who he believes need to get over their “arrogance” and fix their platforms. “One of the things that really offended me was the initial reaction from the tech companies to blow us off,” he began, leaning forward in his leather chair. “‘Oh no! There’s nothing here! Don’t look!’ Only with relentless pressure did they start to come clean.”

He saved his harshest words for Twitter, which he said has dragged its feet far more than Facebook or Google. “All of Twitter’s actions were in the wake of Facebook’s,” Warner complained in his gravelly voice, his face reddening. “They’re drafting!” The company was the only one to miss the January 8 deadline for providing answers to the Intelligence Committee’s inquiries, and, making matters worse, Twitter disclosed weeks later that Kremlin-linked bots managed to generate more than 450 million impressions, substantially higher than the company previously reported. “There’s been this [excuse of], ‘Oh, well, that’s just Twitter.’ That’s not a long-term viable answer.”

Warner stated that he has had offline conversations directly with Facebook CEO Mark Zuckerberg, but never Dorsey. Throwing shade, Warner smiled as he suggested that the company may not be able to commit as many resources as Facebook and Google can because it has a “more complicated, less lucrative business model.”

The big question now is what government intervention might look like. Warner suggested several broad policy prescriptions, including antitrust and data privacy regulations, but the one with the greatest potential effect on Twitter and its rivals would be to make them liable for the content on their platforms. When asked if the European Union, which has been more forceful in its regulation of the technology industry, could serve as a model, the senator replied, “[I’m] glad the EU is acting. I think they’re bolder than we are.”

If the U.S. government does start taking a more activist role in overseeing social networks, it will unleash some of the same nettlesome issues that Europe is already working through. On January 1, for instance, Germany began enforcing a law known as (deep breath) Netzwerkdurchsetzungsgesetz, or NetzDG for short. Rather than establish new restrictions on hate speech, it mandates that large social networks remove material that violates the country’s existing speech laws—which are far more stringent than their U.S. equivalents—within 24 hours of being notified of its existence. “Decisions that would take months in a regular court are now [made] by social media companies in just minutes,” says Mirko Hohmann, a Berlin-based project manager for the Global Public Policy Institute.

In the U.S., rather than wait for federal action or international guidance, state lawmakers in Maryland, New York, and Washington are already working to regulate political ads on social networks. As Warner said, the era of Silicon Valley self-policing is over.

Whether or not the federal government steps in, there are many things Twitter could still do to protect its platform from abuse. One relatively straightforward measure would be to label automated accounts as such, which wouldn’t hobble legitimate feeds but would make it tougher for Russian bots to pose as heartland Trump supporters. The company could do more to discourage people from creating objectionable content in the first place by making its rules more visible and digestible. It could also build trust by embracing transparency as more than a buzzword, sharing with users more about how exactly Twitter works and collaborating with outside researchers.
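Labeling automation would not require solving hard AI; even a crude behavioral screen can separate scheduled feeds from human ones. The sketch below shows one such heuristic, posting cadence, with thresholds that are illustrative guesses rather than anything Twitter has disclosed.

```python
import random
from statistics import pstdev


def looks_automated(timestamps: list[float]) -> bool:
    """timestamps: Unix post times of an account's recent tweets, oldest first."""
    if len(timestamps) < 20:
        return False  # too little history to judge either way
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = sum(gaps) / len(gaps)
    # Human posting is bursty; near-constant intervals suggest a scheduler.
    return pstdev(gaps) < 0.1 * mean_gap


# A feed that posts exactly every 10 minutes would earn the label...
assert looks_automated([600.0 * i for i in range(30)])

# ...while irregular, human-looking gaps would not.
random.seed(1)
human = [0.0]
for _ in range(29):
    human.append(human[-1] + random.uniform(60, 7200))
assert not looks_automated(human)
```

A label produced this way would be advisory rather than punitive: a wire service’s headline bot and a Kremlin sock puppet both get flagged as automated, and readers can weigh them accordingly.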

Toward that end of greater transparency and collaboration, and inspired by research conducted by the nonprofit Cortico and MIT’s Laboratory for Social Machines, the company announced in March that it will attempt to measure its own “conversational health.” It invited other organizations to participate in this process, and Twitter says it will reveal its first partners in July.

The effort is intriguing, but the crowdsourced initiative also sounds eerily similar to Twitter’s Trust and Safety Council, whose mission since it was convened in February 2016 has been for advocates, academics, and grassroots organizations to provide input on the company’s safety approach.

Many people who worked for Twitter want not a metric but a mea culpa. According to one source who has discussed these issues with the company’s leadership, “Their response to everything was basically, ‘Look, we hear you, but you can’t blame Twitter for what happened. If it wasn’t us, it would’ve been another medium.’ The executives didn’t own up to the fact that we are responsible, and that was one of the reasons why I quit.”

Even Senator Warner believes that before his colleagues consider legislation, the tech companies’ CEOs ought to testify before Congress. “I want them all, not just Dorsey. I want Mark and I want [Google cofounders] Sergey [Brin] and Larry [Page],” he said. “Don’t send your lawyers, don’t send the policy guys. They owe the American public an explanation.”

When Twitter debuted its new health metrics initiative, the American public seemed to finally get one, after Dorsey tweeted about Twitter, “We didn’t fully predict or understand the real-world negative consequences. We acknowledge that now.” He continued: “We aren’t proud of how people have taken advantage of our service, or our inability to address it fast enough . . . . We’ve focused most of our efforts on removing content against our terms, instead of building a systemic framework to help encourage more healthy debate, conversations, and critical thinking. This is the approach we now need.”

One week later, Dorsey continued to acknowledge past missteps during a 47-minute live video broadcast on Twitter. “We will make mistakes—I will certainly make mistakes,” he said. “I have done so in the past around this entire topic of safety, abuse, misinformation, [and] manipulation on the platform.”

The point of the live stream was to talk more about measuring discourse, and Dorsey tried to answer user-submitted questions. But the hundreds of real-time comments scrolling by on the screen illustrated the immense challenge ahead. As the video continued, his feed filled with anti-Semitic and homophobic insults, caustic complaints from users who fear Twitter is silencing their beliefs, and plaintive cries for the company to stop racism. Stroking his beard, Dorsey squinted at his phone, watching the bad speech flow as he searched for the good.
