‘Facebook didn’t block BJP MP’s fake account’
Deeksha Bhardwaj
NEW DELHI: Company documents and communications shared by a Meta (formerly Facebook) whistleblower with a parliamentary committee name BJP MP Vinod Sonkar as having run a network of fake accounts that Facebook did not act on despite it being flagged for takedown, according to copies seen by HT.
The decision to leave the network up, in the run-up to the Delhi elections in 2020, adds to instances of seemingly preferential treatment given by the company to some political parties. Sonkar is the second known politician whose activity on the company’s main social network was left untouched; The Wall Street Journal reported that the company did not impose a ban on Telangana BJP leader T Raja despite his violating trust and safety rules through what was classified as hate speech.
“If they had information about fake accounts then they should have blocked them. My page has been verified by them. Why were they allowing fake accounts,” Sonkar said.
According to a document titled India Fake Accounts, the content moderation teams flagged clusters of political spam accounts run by the Aam Aadmi Party, the Congress and the Bharatiya Janata Party (BJP). The company’s then India public policy director, Shivraj Thukral, approved the takedown of the first two but did not respond on the pro-BJP network.
Thukral did not respond to HT’s request for comment.
The document, with a transcript of logs and conversations under what appeared to be a task management system, showed that the whistleblower Sophie
Zhang noted the network linked to Sonkar could “cause civic harm by false amplification”. The comments made by the network did not include illegal content per se, the exchange between Facebook staffers showed, and the network was classified as a manual inauthentic behaviour group rather than a coordinated inauthentic behaviour cluster, a more serious operation that involves automated bots. Both types of clusters, however, use likes and comments to amplify posts or pages.
“There is a lack of prioritisation in terms of semi-sophisticated operations,” Zhang states in her chats (paraphrased). “There is also a gap in implementation of policy when content is not violative of content policies, but violative in terms of behaviour.”
The documents showed that one of the other staffers working on this case asked whether “we’re comfortable acting on those actors”, since Sonkar’s account was classified as a “government partner” and “high profile” account by Facebook’s XCheck, a system it uses internally to tag prominent accounts that are exempted from some automated enforcement actions.
The incident underscores criticism that the company treats violations by different political entities differently, and that its content moderation policies and processes lack transparency. According to Zhang, who spoke to HT in an interview, public policy teams at Facebook determine the rules of engagement and how to enforce them. “A point of clarification is that fake accounts are separate from content moderation, fake accounts (inauthentic activity) are based on behaviour,” she said.
“Public policy determines the terms of service and community standards and how are they enforced. When Facebook employees want to take action against something that hasn’t before been actioned, they need to seek approval. While, when teams are operating within a given ambit, they can act accordingly.”
“We have not been provided the documents and cannot speak to the specific assertions, but we have stated previously that we fundamentally disagree with Ms. Zhang’s characterization of our priorities and efforts to root out abuse on our platform,” a Meta spokesperson said.
“We aggressively go after abuse around the world and have specialized teams focused on this work. As a result, we’ve already taken down more than 150 networks of coordinated inauthentic behaviour. Around half of them were domestic networks that operated in countries around the world, including those in India. Combatting coordinated inauthentic behavior is our priority. We’re also addressing the problems of spam and fake engagement. We investigate each issue before taking action or making public claims about them,” this person added, without responding to requests to explain the role of the specific public policy executives in the decisions taken in this particular case.
The spokesperson rejected claims that content moderation decisions are made unilaterally. “The decisions around content escalations are not made unilaterally by any one person, including any one member of the India public policy team; rather, they are inclusive of views from different teams and disciplines within the company. The process comes with robust checks and balances built in to ensure that the policies are implemented as they are intended to be and take into consideration applicable local laws. We strive to apply our policies uniformly without regard to anyone’s political positions or party affiliations,” the spokesperson added.