Impostors thrive on social media
Lax enforcement lets ‘bot’ accounts flourish
When Hilary Mason, a data scientist and entrepreneur, discovered that dozens of automated “bot” accounts had sprung up to impersonate her on Twitter, she immediately set out to stop them.
She filed dozens of complaints with Twitter, repeatedly submitting copies of her driver’s license to prove her identity. She reached out to friends who worked at the company. But days later, many of the fake accounts remained active, even though virtually identical ones had been shut down.
Millions of accounts impersonating real people roam social media platforms, promoting commercial products and celebrities, attacking political candidates and sowing discord. They spread faked images and misinformation about the school shooting last week in Parkland, Fla. They were central to Russian attempts to sway the 2016 presidential election in favor of Donald Trump, according to a federal grand jury indictment Friday. And U.S. intelligence officials believe they will figure in Russian efforts to shape the upcoming midterm elections, too.
Yet social media companies often fail to vigorously enforce their own policies against impersonation, a New York Times examination found, enabling the spread of fake news and propaganda — and allowing a global black market in social identities to thrive on their platforms.
Facebook and Twitter require proof of identity to shut down an impostor account but none to set one up. And even as social media accounts evolve into something akin to virtual passports — for shopping, political activity and even gaining access to government services — technology companies have devised their own rules and standards, with little oversight or regulation from Washington.
“These companies have, in a lot of ways, assigned themselves to be validators of your identity,” said Jillian York, an official at the Electronic Frontier Foundation, which advocates digital privacy protections. “But the vast majority of users have no access to any due process, no access to any kind of customer service — and no means of appealing any kind of decision.”
A Times investigation last month found that many real accounts are copied and turned into automated “bots” sold by companies like Devumi, a firm now based in Colorado that is under investigation by attorneys general in Florida and New York.
Twitter appears to be tracking Devumi’s network of bots. Since The Times investigation was published, dozens of Devumi’s most prominent clients — actors, reality TV stars, authors, business executives and others seeking to buy followers and retweets — have lost more than 3 million followers. Close to 55,000 impostor accounts sold by Devumi have been restricted or suspended.
Both Facebook and Twitter rely in part on their users to report impostors and abuse. But the companies’ enforcement decisions can seem arbitrary. In January, Antonia Caliboso, a social worker in Seattle, discovered an impostor Facebook account. But Facebook representatives repeatedly told her the account did not violate the company’s impersonation policies. Eventually, Caliboso deleted her real account.
Last week, Facebook reversed its position: The account was shut down “because it goes against our community standard on identity and privacy,” according to an email Caliboso received.