The Washington Post

In Kenya, dangerous content is thriving on social media

Experts say Facebook and TikTok fail to keep pace with disinformation

- BY NEHA WADEKAR

nairobi — The shooter approaches from behind, raising a pistol to his victim’s head. He pulls the trigger and “pop,” a lifeless body slumps forward. The shot cuts to another execution, and another. The video was posted on Facebook, in a large group of al-Shabab and Islamic State supporters, where different versions were viewed thousands of times before being taken down.

As Facebook and its competitor TikTok grow at breakneck speed in Kenya and across Africa, researchers say the technology companies are failing to keep pace with a proliferation of terrorist content, hate speech and false information, and are taking advantage of weak regulatory frameworks to avoid stricter oversight.

“It is a deliberate choice to maximize labor and profit extraction, because they view the societies in the Global South primarily as markets, not as societies,” said Nanjala Nyabola, a Kenyan technology and social researcher.

About 1 in 5 Kenyans use Facebook, whose parent company renamed itself Meta last year, and TikTok has become one of the most downloaded apps in the country. The prevalence of violent and inflammatory content on the platforms poses real risks in this East African nation as it prepares for a bitterly contested presidential election next month and confronts the threat of terrorism posed by a resurgent al-Shabab.

“Our approach to content moderation in Africa is no different than anywhere else in the world,” Kojo Boakye, director of public policy for Africa, the Middle East and Turkey for Meta, wrote in an email to The Washington Post. “We prioritize safety on our platforms and have taken aggressive steps to fight misinformation and harmful content.”

Fortune Mgwili-Sibanda, the head of government relations and public policy in sub-Saharan Africa for TikTok, also responded to The Post by email, writing: “We have thousands of people working on safety all around the world, and we’re continuing to expand this function in our African markets in line with the continued growth of our TikTok community on the continent.”

The companies use a two-pronged content moderation strategy, with artificial intelligence (AI) algorithms providing a first line of defense. But Meta has acknowledged that it is challenging to teach AI to recognize hate speech across multiple languages and contexts, and reports show that posts in languages other than English often slip through the cracks.

In June, researchers at the Institute for Strategic Dialogue in London released a report outlining how al-Shabab and the Islamic State use Facebook to spread extremist content, like the execution video.

The two-year investigation revealed at least 30 public al-Shabab and Islamic State propaganda pages with nearly 40,000 combined followers. The groups posted videos depicting gruesome assassinations, suicide bombings, attacks on Kenyan military forces and Islamist militant training exercises. Some content had remained on the platform for more than six years.

Reliance on AI was a core problem, said Moustafa Ayad, one of the authors of the report, because bad actors have learned how to game the system. If the terrorists know the AI is looking for the word jihad, Ayad explained, they can “split up J.I.H.A.D with periods in between the letters, so now it is not being read properly by the AI system.”

Ayad said most of the accounts flagged in the report have now been removed, but similar content has since popped up, such as a video posted in July featuring Fuad Mohamed Khalaf, an al-Shabab leader wanted by the U.S. government. It garnered 141,000 views and 1,800 shares before being removed after 10 days.

Terrorist groups can also bypass human moderation, the second line of defense for social media companies, by exploiting gaps in language and cultural expertise, the report said. The official languages in Kenya are English and Swahili, but Kenyans speak dozens of other tribal languages, dialects and the local slang known as Sheng.

Meta said it has a 350-person multidisciplinary team, including native Arabic, Somali and Swahili speakers, who monitor and handle terrorist content. Between January and March, the company claims to have removed 15 million pieces of content that violated its terrorism policies, but it did not say how much terrorist content it believes remains on the platform.

In January 2019, al-Shabab attacked the DusitD2 complex in Nairobi, killing 21 people. A government investigation later revealed that the attackers planned the assault using a Facebook account that went undetected for six months, according to local media.

During the Kenyan election in 2017, journalists documented how Facebook struggled to rein in the spread of ethnically charged hate speech, an issue researchers say the company is still failing to address. Adding to their worries now is the growing popularity of TikTok, which is also being used to inflame tensions ahead of the presidential vote in August.

In June, the Mozilla Foundation released a report outlining how election disinformation in Kenya has taken root on TikTok. The report examined more than 130 videos from 33 accounts that had been viewed over 4 million times, finding ethnic-based hate speech as well as manipulated and false content that violated TikTok policies.

One video clip mimicked a detergent commercial in which the narrator told viewers that the “detergent” could eliminate “madoadoa,” including members of the Kamba, Kikuyu, Luhya and Luo tribes. Interpreted literally, “madoadoa” is an innocuous word meaning blemish or spot, but it can also be a coded ethnic slur and a call to violence. The video contained graphic images of post-election clashes from previous years.

After the report was published, TikTok removed the video and flagged the term “madoadoa,” but the episode showed how the nuances of language can elude human moderators. A TikTok whistleblower told report author Odanga Madung that she was asked to watch videos in languages she did not speak and determine, from looking at the images alone, whether they violated the platform’s guidelines.

TikTok did not directly respond to that allegation when asked for comment by The Post, but the company recently issued a statement about its efforts to address problematic election content.

TikTok said it moderates content in more than 60 languages, including Swahili, but declined to give additional details about its moderators in Kenya or the number of languages it monitors. It has also launched a Kenya-specific operations center with experts who detect and remove posts that violate its policies. And in July, it rolled out a user guide containing election and media literacy information.

“We have a dedicated team working to safeguard TikTok during the Kenyan elections,” Mgwili-Sibanda wrote. “We prohibit and remove election misinformation, promotions of violence and other violations of our policies.”

But researchers still worry that violent rhetoric online could lead to real violence. “One will see these lies really turn into very tragic consequences for people attending rallies,” said Irungu Houghton, director of Amnesty International Kenya.

Researchers say TikTok and Meta can get away with lower content moderation standards in Kenya, in part because Kenyan law does not directly hold social media companies responsible for harmful content on their platforms. By contrast, Germany’s Network Enforcement Act, widely known as the “Facebook Act,” fines companies up to $50 million if they do not remove “clearly illegal” content within 24 hours after a user files a complaint.

“This is quite a gray area,” said Mugambi Laibuta, a Kenyan lawyer. “When you’re talking about hate speech, there’s no law in Kenya that states that these sites should enforce content moderation.”

If Meta and TikTok do not police themselves, experts warn, African governments will do it for them, possibly in anti-democratic and dangerous ways.

“If the platforms don’t get their act together, they become convenient excuses for authoritarians to clamp down on them across the continent” and “a convenient excuse for them to disappear,” Madung said. “And we all need these platforms to survive. We need them to thrive.”

2011 photo by Farah Abdi Warsameh/Associated Press: Al-Shabab members have attempted to use social media to destabilize Kenya before the election this month. The platforms face challenges as they use artificial intelligence to crack down on hate speech.
