Facebook and TikTok are failing to curb hate speech in Kenya


Terrorist groups in Kenya have tried to use social media to destabilize the country ahead of the election. (Farah Abdi Warsameh/Associated Press)


Washington Post

July 31, 2022

By Neha Wadekar


NAIROBI — The shooter approaches from behind, raising a pistol to his victim’s head. He pulls the trigger and “pop,” a lifeless body slumps forward. The shot cuts to another execution, and another. The video was posted on Facebook, in a large group of al-Shabab and Islamic State supporters, where different versions were viewed thousands of times before being taken down.


As Facebook and its competitor TikTok grow at breakneck speed in Kenya and across Africa, researchers say the technology companies are failing to keep pace with a proliferation of terrorist content, hate speech and false information, while taking advantage of weak regulatory frameworks to avoid stricter oversight.


“It is a deliberate choice to maximize labor and profit extraction, because they view the societies in the Global South primarily as markets, not as societies,” said Nanjala Nyabola, a Kenyan technology and social researcher.


About 1 in 5 Kenyans use Facebook, whose parent company renamed itself Meta last year, and TikTok has become one of the most downloaded apps in the country. The prevalence of violent and inflammatory content on the platforms poses real risks in this East African nation as it prepares for a bitterly contested presidential election next month and confronts the threat posed by a resurgent al-Shabab.


“Our approach to content moderation in Africa is no different than anywhere else in the world,” Kojo Boakye, Meta’s director of public policy for Africa, the Middle East and Turkey, wrote in an email to The Washington Post. “We prioritize safety on our platforms and have taken aggressive steps to fight misinformation and harmful content.”


Fortune Mgwili-Sibanda, the head of government relations and public policy in sub-Saharan Africa for TikTok, also responded to The Post by email, writing: “We have thousands of people working on safety all around the world, and we’re continuing to expand this function in our African markets in line with the continued growth of our TikTok community on the continent.”


The companies use a two-pronged content moderation strategy, with artificial intelligence (AI) algorithms providing the first line of defense. But Meta has acknowledged that teaching AI to recognize hate speech across multiple languages and contexts is difficult, and reports show that posts in languages other than English often slip through the cracks.


In June, researchers at the Institute for Strategic Dialogue in London released a report outlining how al-Shabab and the Islamic State use Facebook to spread extremist content, like the execution video. The two-year investigation revealed at least 30 public al-Shabab and Islamic State propaganda pages with nearly 40,000 combined followers. The groups posted videos depicting gruesome assassinations, suicide bombings, attacks on Kenyan military forces and Islamist militant training exercises. Some content had lived on the platform for more than six years.


Reliance on AI was a core problem, said Moustafa Ayad, one of the authors of the report, because bad actors have learned how to game the system. If the terrorists know the AI is looking for the word jihad, Ayad explained, they can “split up J.I.H.A.D with periods in between the letters, so now it is not being read properly by the AI system.”
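The evasion Ayad describes is easy to reproduce. The sketch below is illustrative only, not Meta's actual system: the blocklisted term and the normalization rule are hypothetical, and production moderation models are far more sophisticated than keyword matching. It shows how an exact-match filter misses a term split up with periods, and how stripping separator characters first recovers it.

```python
# Toy illustration of the keyword-evasion tactic described in the report.
# Not Meta's moderation system; blocklist and rules are hypothetical.
import re

BLOCKLIST = {"jihad"}  # hypothetical flagged keyword

def naive_filter(text: str) -> bool:
    """Flags text only if a blocklisted word appears verbatim."""
    words = text.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

def normalized_filter(text: str) -> bool:
    """Collapses periods, dashes and spaces first, so 'J.I.H.A.D' still matches.
    (Substring matching like this also risks false positives in practice.)"""
    collapsed = re.sub(r"[.\-_*\s]+", "", text.lower())
    return any(term in collapsed for term in BLOCKLIST)

post = "Join the J.I.H.A.D today"
print(naive_filter(post))       # False: the periods defeat the exact match
print(normalized_filter(post))  # True: normalization recovers the term
```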


Ayad said most of the accounts flagged in the report have now been removed, but similar content has since popped up, such as a video posted in July featuring Fuad Mohamed Khalaf, an al-Shabab leader wanted by the U.S. government. It garnered 141,000 views and 1,800 shares before being removed after 10 days.


Terrorist groups can also bypass human moderation, the second line of defense for social media companies, by exploiting gaps in moderators' language and cultural expertise, the report said. Kenya's official languages are English and Swahili, but Kenyans speak dozens of other tribal languages and dialects, as well as the local slang known as Sheng.


Meta said it has a 350-person multidisciplinary team, including native Arabic, Somali and Swahili speakers, that monitors and handles terrorist content. Between January and March, the company says it removed 15 million pieces of content that violated its terrorism policies, but it did not say how much terrorist content it believes remains on the platform.


In January 2019, al-Shabab attacked the DusitD2 complex in Nairobi, killing 21 people. A government investigation later revealed that the attackers planned the assault using a Facebook account that went undetected for six months, according to local media.


During the Kenyan election in 2017, journalists documented how Facebook struggled to rein in the spread of ethnically charged hate speech, an issue researchers say the company is still failing to address. Adding to their worries now is the growing popularity of TikTok, which is also being used to inflame tensions ahead of the presidential vote in August.


In June, the Mozilla Foundation released a report outlining how election disinformation in Kenya has taken root on TikTok. The report examined more than 130 videos from 33 accounts that had been viewed more than 4 million times, finding ethnic-based hate speech as well as manipulated and false content that violated TikTok policies.


One video clip mimicked a detergent commercial in which the narrator told viewers that the "detergent" could eliminate "madoadoa," including members of the Kamba, Kikuyu, Luhya and Luo tribes. Interpreted literally, "madoadoa" is an innocuous Swahili word for spots or stains, but in the context of Kenyan elections it has been used as coded hate speech calling for the removal of ethnic groups.