Business World

A ‘dirty and open secret’: Can social media curb fake followers?

Social media users, advertisers and regulators were aghast this past week over revelations in a report by The New York Times of a thriving cottage industry that creates fake followers on Twitter, Facebook or other channels for anybody willing to pay for them. Called “bots,” these fake accounts are available in the thousands to those who want to boost their popularity with tweets or retweets on Twitter, or Facebook likes or shares.

Although Twitter and Facebook officially frown on users buying followers and regularly take down fake accounts, they have a vested interest in the popularity scores of their users because advertisers use those metrics. The political will to legislate against buying followers may also not be readily available, experts say, pointing out that some of President Trump’s appointees bought followers, as did others such as computer billionaire Michael Dell and Treasury Secretary Steve Mnuchin’s actress wife, Louise Linton.

“This is a dirty and open secret of social media,” said Kartik Hosanagar, Wharton professor of operations, information and decisions. “This has been going on for a while, and The New York Times article finally puts the spotlight on this shadow economy. Overall, social media is a complete mess right now in terms of the sanctity of information circulating on it.”

At least one law enforcer has been quick to act — New York attorney general Eric T. Schneiderman. “The growing prevalence of bots means that real voices are too often drowned out in our public conversation,” he tweeted last Saturday after the article was published, adding that “impersonation and deception are illegal” under New York law. “Those who can pay the most for followers can buy their way to apparent influence.”

Later that day, Mr. Schneiderman’s office launched an investigation into Devumi, a social media marketing services firm in Florida that The New York Times report highlighted as one of those behind the fake followers. “Drawing on an estimated stock of at least 3.5 million automated accounts, each sold many times over, [Devumi] has provided customers with more than 200 million Twitter followers,” the newspaper’s investigation found.

THE ATTENTION ECONOMY

Jennifer Golbeck, professor and director of the Social Intelligence Lab at the University of Maryland, noted that the phenomenon of buying “attention” has been around for a while. She pointed to online contests where, say, somebody posting a picture of their pet may pay for votes while canvassing for their entry.

“You actually can go buy from some of the same companies hundreds or thousands of votes in online contests, and they exist obviously on platforms like Twitter and Facebook,” Ms. Golbeck said. She has also bought YouTube views as part of her research. “It’s super simple. For example, if you have a tweet that you want to get likes for or have retweeted, you go to the website of one of these companies (like a Devumi), you paste in the link to that tweet, and you pay them via PayPal. As soon as your payment processes, you can watch the ‘like’ count zip up in the next two or three minutes.”

Devumi’s rates start from $10 for 500+ followers, delivered within two days. Devumi’s founder, German Calas, denied that his company sold fake followers and said he knew nothing about social identities stolen from real users, The New York Times report noted.

“Devumi is one of perhaps hundreds of a cottage industry of websites that sell Facebook likes, Twitter followers and so forth,” said David Elliot Berman, a doctoral candidate at the University of Pennsylvania’s Annenberg School of Communication, who has also been studying the issue of fake followers. “Devumi is a reseller or wholesaler of bots created mostly in the Third World, and its customers are people who want to boost their social media presence and essentially engage in a form of ‘attention hacking.’”

Ms. Golbeck and Mr. Berman discussed the fake-follower controversy on the Knowledge@Wharton show on Wharton Business Radio on SiriusXM channel 111.

Wharton marketing professor Pinar Yildirim explained the extent of damage that fake followers can create. The first issue relates to identity theft or identity copying, where people’s personal information or photos are used to create fake accounts. The second is the misperception and misinformation that arise from these accounts. “Topics that are highly mentioned, tweeted, tagged may seem important, or supported, or liked by others when in fact they are not,” she said.

Ms. Yildirim noted that the practice may not be overly significant when a celebrity or a brand pretends to be liked and endorsed by many. But the damage could be worse when politicians and public figures make statements that do not find large support in public, “but because of fake accounts and bots look like they do,” she said. “[That] is likely to create negative emotions [such as] rage, anger, and an even unnecessary divide between the masses.” Fake news and its diffusion, for instance, are partly due to paid fake accounts, she noted.

CONFLICT OF INTEREST

According to Mr. Hosanagar, Twitter, Facebook and others need to do more to curb the practice of fake followers, but their business models may get in the way of those efforts. “They are currently not incentivized enough to clamp down on these types of accounts because it suits them well,” he said. “But the long-term implications will be dramatic,” he warned.

Mr. Berman said the practice of buying followers suits the business model of social media platforms. “They are incentivized to encourage their users to pursue these kinds of activities,” he said. “They want people to create viral content. They want people to create engaging content because that makes their platform more sticky and makes it more attractive for people to stay on. This is an ‘attention economy,’ which is based on self-promotion, and social bonds are one way to address this.”

Advertisers would also not take kindly to fake followers, said Mr. Berman. “When people advertise on Facebook, they expect that their advertisements will reach a certain number of people – flesh and blood people,” he noted. “[They would see] a significant inefficiency because they’re spending money that’s not actually being used to get their content out to real people.”

According to The New York Times, an estimated 48 million of Twitter’s reported active users, or about 15% of its total user base, are “automated accounts designed to simulate real people,” while Facebook acknowledged in November that it may have about 60 million automated users.

THE EVOLUTION OF FAKE FOLLOWERS

Over the years, the marketplace has provided openings for fake followers to step in. Mr. Hosanagar recalled that some eight to nine years ago, social media marketing was focused almost exclusively on acquiring more and more followers. As a result, lots of brands and individuals spent money on acquiring followers, and marketing companies cropped up that offered to build one’s follower base, he said.

“[However,] in practice, most of these followers did not engage with the social media posts of brands and so-called influencers,” Mr. Hosanagar continued. “In response, the emphasis shifted to engagement – how many of your followers actually like, comment on or share your social media posts.

“Posts that show brand personality — like emotional, humorous posts — or ones that emphasize your philanthropic activities [get more traction],” he noted. That has been evidenced by his research on aspects of social media content that are associated with greater engagement.

Although the emphasis shifted towards engagement for social media companies, “that didn’t solve the problem,” said Mr. Hosanagar. “These shadow economies doubled down on creating bots and fake accounts that would share or retweet their clients’ posts. So, engagement on someone’s posts looks high even though there is no real engagement. In some cases, the clients are not even aware this is happening and are innocent victims. In other cases, they fully understand it and find it convenient to play along.”

Companies like Facebook and Twitter typically try to work around these problems in three ways, “none of which is perfect,” said Ms. Yildirim. “First, data can tell a lot about the authenticity of an account.” She noted that an authentic social media account could mention various topics such as the Eagles in the Super Bowl, the Uber they took that morning, their kids, or a TV show from the night before. They are also likely to show a pattern in the timing of their posts — for example, a student with a fixed course schedule may be posting in between classes, she added. Also, “they will have two-way engagements with other accounts.”

Fake accounts, on the other hand, are more likely to have only one-way engagements, said Ms. Yildirim. “Real people are located in a network of other real people and show a pattern of usage that is likely to differ from that of the fake account. Our online accounts are a reflection of our offline connections, so the average real person will be found, tagged and mentioned by others, will post photos and show up in others’ photos, and will react to unexpected events that are happening.”

With a paid account, a user does not have the incentives to exhibit all those behaviors and mimic a real person perfectly, “unless paid very well,” she noted. “Analyzing the patterns in the usage data already tells a lot… about how likely [it is that] an account is real or not.”
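
The behavioral signals Ms. Yildirim describes — topic diversity and one-way versus two-way engagement — can be turned into simple numeric features. The sketch below is purely illustrative; the function names, feature choices and example numbers are assumptions for this article, not any platform’s actual detection method.

```python
import math

def engagement_ratio(sent, received):
    """How balanced an account's interactions are. Real users both
    mention others and get mentioned back; paid accounts mostly
    broadcast, so the smaller side of the exchange stays tiny."""
    total = sent + received
    return 0.0 if total == 0 else min(sent, received) / total

def topic_entropy(posts_by_topic):
    """Shannon entropy over the topics an account posts about.
    Authentic users spread posts across sports, work, family, TV;
    a paid account often hammers a single promoted topic."""
    total = sum(posts_by_topic.values())
    probs = [n / total for n in posts_by_topic.values() if n > 0]
    return -sum(p * math.log2(p) for p in probs)

# A likely-real account: varied topics, balanced interaction.
real_topics = {"superbowl": 5, "uber": 3, "kids": 4, "tv": 2}
# A likely-fake account: one promoted topic, one-way activity.
fake_topics = {"client_brand": 40}

assert topic_entropy(real_topics) > topic_entropy(fake_topics)
assert engagement_ratio(30, 25) > engagement_ratio(500, 1)
```

A real detector would combine many such features, but even this toy version captures the asymmetry she points to: the single-topic, broadcast-only account scores near zero on both measures.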

Secondly, the social media platforms also screen accounts based on flags raised by other users, and thus “crowd-source the policing,” said Ms. Yildirim. Third, they hire moderators who use personal judgment to detect and curb harmful accounts, she added.

HOW BOTS EVADE DETECTION

The creators of the bots try to stay a step ahead of platforms like Facebook and Twitter that might attempt to identify them and take them down. If the detection algorithms at Facebook or Twitter find that some users are only liking certain pages they are paid to promote, they would weed out those accounts and close them down. “But what [the bot creators] do to trick these algorithms into believing that they are real people is to like random people that they have not been paid to like or tweet,” said Mr. Berman.

Legitimate ways to gain traction in the so-called “attention economy” do exist. Ms. Golbeck pointed to Facebook’s marketing offerings for businesses. But Facebook or Twitter may find such services in conflict with their attempts to go after fake user pages, she said. “Facebook has a disincentive to identify and shut down some of those fake ‘liking’ accounts because, ultimately, those accounts liking your content dilutes it getting out to the real audience. It gives you more incentive to pay Facebook to boost the visibility of your post.”

Three years ago, Ms. Golbeck published lists of Russian bot networks that were posting spam and creating fake followers, and wrote research papers on them. “Those fake accounts are still active on Twitter,” she said. “The last thing they want to do is take down, say, 20% of their active monthly accounts that are bought. It’s something where you can make a ton of money. It seems like it’s making up a substantial percentage of the accounts on these networks.”

WAYS TO WEED OUT FAKES

At the same time, the leading social media companies are recognizing the dangers of fake followers and are creating new ways to check them. “YouTube, for example, has adjusted its monetization qualifications, or the rates for you to be able to make money on ads, by now requiring view time,” said Ms. Golbeck. That refers to a certain number of hours per year that people watch a video. “That’s a lot harder to buy than people just liking your page. The metrics are shifting a little bit away from number of followers or number of likes to real engagements that are harder to automate, though we can certainly expect that the bots are going to catch up and find ways to do that [as well].”

Social media companies can do a better job of validating user accounts, and can also borrow ideas from other online platforms that have faced similar challenges, said Mr. Hosanagar. “On the Web, how does Google figure out that a page is genuine and should be ranked higher?” he asked. “They look at the number of other reputable Web sites that link to your Web site as an indicator of your online reputation or reliability. Twitter can similarly look at followership from reputed or validated accounts as an indicator.” He also pointed to eBay, which rolled out rating systems to establish a profile’s reliability and trustworthiness. “They could consider a variant of such rating systems to validate accounts.”
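
Mr. Hosanagar’s PageRank-style suggestion — weighting followership by who the followers are rather than how many there are — might be sketched as below. This is a toy illustration under invented weights and account names, not Twitter’s or Google’s actual scoring.

```python
def reputation_score(followers, validated, w_validated=10.0, w_other=0.01):
    """Score an account by the reputation of its followers.
    Followers that the platform has itself validated count heavily;
    unvalidated ones count almost nothing, so bulk-bought bot
    followers add little. Weights here are arbitrary assumptions."""
    n_validated = sum(1 for f in followers if f in validated)
    n_other = len(followers) - n_validated
    return w_validated * n_validated + w_other * n_other

validated = {"nytimes", "wharton"}                # hypothetical verified ids
bots = [f"bot{i}" for i in range(1000)]           # 1,000 bought followers
organic = [f"user{i}" for i in range(18)] + ["nytimes", "wharton"]

print(reputation_score(bots, validated))     # ≈ 10.0
print(reputation_score(organic, validated))  # ≈ 20.2
```

Under this scheme a small organic following that includes two validated accounts outscores a thousand bought bots, which is the inversion of incentives the quote is after.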

Mr. Hosanagar added that social media companies could also tap into research on using machine learning techniques to identify fake accounts. Those techniques look at the activities of an account (languages used versus location of the account, patterns in tweeting and followership, and many other factors) to detect fake accounts, he explained. Online platforms are surely using some of these techniques, “but they can and should invest more in these efforts,” he added. “The problem is solvable at least to the extent that fake followership and activities can be brought down to less than 1% of these networks. But the question is whether they want to.”
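
The machine-learning approach Mr. Hosanagar refers to is usually framed as supervised classification over account-activity features. The minimal sketch below learns a one-node decision tree (a stump) from labeled examples; the features, numbers and labels are all invented for illustration, and real platform classifiers are vastly more sophisticated.

```python
# Toy labeled accounts: (posts_per_day, follow_ratio, reply_share),
# label 1 = bot. All feature values here are made up.
data = [
    ((2.0, 0.9, 0.40), 0), ((1.5, 1.1, 0.35), 0), ((3.0, 0.8, 0.50), 0),
    ((80.0, 30.0, 0.01), 1), ((120.0, 45.0, 0.00), 1), ((95.0, 25.0, 0.02), 1),
]

def best_stump(data, feature):
    """Learn a one-node decision tree: the threshold on a single
    feature that classifies the most labeled examples correctly."""
    values = sorted({x[feature] for x, _ in data})
    best_t, best_correct = values[0], -1
    for lo, hi in zip(values, values[1:]):
        t = (lo + hi) / 2  # candidate split between adjacent values
        correct = sum((x[feature] > t) == bool(y) for x, y in data)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

threshold = best_stump(data, feature=0)   # split on posting volume

def is_bot(account):
    return account[0] > threshold

print(threshold)                      # 41.5 on this toy data
print(is_bot((100.0, 40.0, 0.01)))    # True: inhuman posting volume
print(is_bot((2.5, 1.0, 0.45)))       # False: human-paced account
```

Production systems combine many such weak signals (random forests, neural networks) over far richer features, but the principle is the same: learn decision rules from accounts already known to be fake.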

Even after a bot has been detected, it is a challenge for a platform to decide how to deal with it, said Ms. Yildirim. “Is a paid account harmful to others, or does it simply not matter? We are more influenced by the people we know, people who are in our social circles, and these fake active accounts may simply not matter to many users for that reason. Secondly, is the influence of a paid follower different than the influence of a paid blogger? When a brand buys a lot of followers, is it different than advertising? These questions have to be carefully answered, and the platforms do not yet have answers.”

‘DEEPER INVESTIGATIONS’

In time, Ms. Golbeck said she expects to see “deeper investigations” to identify fake followers and the firms creating them. “We’re going to see a lot of investigations certainly on the legal side regarding fraud, but also on the political side — how is this making politicians look more popular? Bots have been an issue for Trump in a lot of ways — different ones than this, but it’s expanding that universe that we’ll look at.”

Ms. Golbeck and Mr. Berman are not convinced that self-regulation is the solution. “It’s going to be hard to get to a place where we see internal industry regulation… to stop those bots,” said Ms. Golbeck. Added Mr. Berman: “I would not rely on the beneficence of Facebook and Twitter to regulate themselves. I think there has to be external regulation.”

According to Ms. Golbeck and Mr. Berman, another obstacle to effective policing is that many regulators don’t fully understand the social media space and its abuses. “They tend to be older; they didn’t grow up with this technology,” Ms. Golbeck said. “Some of them are perfectly capable of using and understanding these [platforms], but not all of them do.”

If changes do happen, not all platform users will be happy. “A lot of people don’t want the bots to go away if it means their followers and likes decrease,” said Ms. Golbeck. She recalled “a big purge” of fake accounts a few years ago on Instagram, which caused follower numbers to drop sharply for some users and angered them, even in cases where they hadn’t bought those bots. “They were demanding that Instagram give back the fake followers because they wanted to look more popular, even though they knew they were fake.”
