Hindustan Times ST (Mumbai)

‘Facebook didn’t block BJP MP’s fake account’


Deeksha Bhardwaj

NEW DELHI: Company documents and communications shared by a Meta (formerly Facebook) whistleblower with a parliamentary committee name BJP MP Vinod Sonkar as having run a network of fake accounts that Facebook did not act on despite it being flagged for takedown, according to copies seen by HT.

The decision to leave the network up, in the run-up to the Delhi elections in 2020, adds to a list of instances of seemingly preferential treatment given by the company to some political parties, and Sonkar is the second known politician whose activity on the company’s main social network was left untouched. The Wall Street Journal reported that the company did not impose a ban on Telangana BJP leader T Raja despite his violating trust and safety rules through what was classified as hate speech.

“If they had information about fake accounts, then they should have blocked them. My page has been verified by them. Why were they allowing fake accounts?” Sonkar said.

According to a document titled India Fake Accounts, the content moderation teams flagged clusters of political spam accounts run by the Aam Aadmi Party, the Congress and the Bharatiya Janata Party (BJP). The company’s then India public policy director, Shivraj Thukral, approved the takedown of the first two but did not respond on the pro-BJP network.

Thukral did not respond to HT’s request for comment.

The document, with a transcript of logs and conversations under what appeared to be a task management system, showed that the whistleblower Sophie Zhang noted the network linked to Sonkar could “cause civic harm by false amplification”. The comments made by the network did not include illegal content per se, the exchange between Facebook staffers showed, and the network was classified as a manual inauthentic behaviour group instead of a coordinated inauthentic behaviour cluster, which is a more serious operation that involves automated bots. But both types of clusters use likes and comments as a way to amplify posts or pages.

“There is a lack of prioritisation in terms of semi-sophisticated operations,” Zhang states in her chats (paraphrased). “There is also a gap in implementation of policy when content is not violative of content policies, but violative in terms of behaviour.”

The documents showed that one of the other staffers working on this case asked whether “we’re comfortable acting on those actors”, since Sonkar’s account was classified as a “government partner” and “high profile” account by Facebook’s XCheck, a system it uses internally to tag prominent accounts that are exempted from some automated enforcement actions.

The incident underscores criticism that the company treats violations by different political entities differently, and that its content moderation policies and processes lack transparency. According to Zhang, who spoke to HT in an interview, public policy teams at Facebook determine the rules of engagement and how to enforce them. “A point of clarification is that fake accounts are separate from content moderation; fake accounts (inauthentic activity) are based on behaviour,” she said.

“Public policy determines the terms of service and community standards and how they are enforced. When Facebook employees want to take action against something that hasn’t been actioned before, they need to seek approval. Whereas, when teams are operating within a given ambit, they can act accordingly.”

“We have not been provided the documents and cannot speak to the specific assertions, but we have stated previously that we fundamentally disagree with Ms. Zhang’s characterization of our priorities and efforts to root out abuse on our platform,” a Meta spokesperson said.

“We aggressively go after abuse around the world and have specialized teams focused on this work. As a result, we’ve already taken down more than 150 networks of coordinated inauthentic behaviour. Around half of them were domestic networks that operated in countries around the world, including those in India. Combatting coordinated inauthentic behavior is our priority. We’re also addressing the problems of spam and fake engagement. We investigate each issue before taking action or making public claims about them,” this person added, without responding to requests to explain the role of the specific public policy executives in the decisions taken in this particular case.

The spokesperson rejected claims that content moderation decisions are made unilaterally. “The decisions around content escalations are not made unilaterally by any one person, including any one member of the India public policy team; rather, they are inclusive of views from different teams and disciplines within the company. The process comes with robust checks and balances built in to ensure that the policies are implemented as they are intended to be and take into consideration applicable local laws. We strive to apply our policies uniformly without regard to anyone’s political positions or party affiliations,” the spokesperson added.

Sophie Zhang
