Hate Speech in Myanmar Continues to Thrive on Facebook

By SAM McNEIL and VICTORIA MILKO


FILE - Young demonstrators participate in an anti-coup mask strike in Yangon, Myanmar, on April 4, 2021. Years after coming under scrutiny for contributing to ethnic and religious violence in Myanmar, internal documents viewed by The Associated Press show that Facebook continues to have problems detecting and moderating hate speech and misinformation on its platform in the Southeast Asian nation. (AP Photo, File)


Years after coming under scrutiny for contributing to ethnic and religious violence in Myanmar, Facebook still has problems detecting and moderating hate speech and misinformation on its platform in the Southeast Asian nation, internal documents viewed by The Associated Press show.


Three years ago, the company commissioned a report that found Facebook was used to “foment division and incite offline violence” in the country. It pledged to do better and developed several tools and policies to deal with hate speech.


But the problems have persisted -- and have even been exploited by hostile actors -- since the Feb. 1 military takeover this year, which resulted in gruesome human rights abuses across the country.


Scrolling through Facebook today, it’s not hard to find posts threatening murder and rape in Myanmar.


A 2 1/2-minute video posted Oct. 24, in which a supporter of the military calls for violence against opposition groups, has garnered more than 56,000 views.


“So starting from now, we are the god of death for all (of them),” the man says in Burmese while looking into the camera. “Come tomorrow and let’s see if you are real men or gays.”



One account posts the home address of a military defector and a photo of his wife. Another post from Oct. 29 includes a photo of soldiers leading bound and blindfolded men down a dirt path. The Burmese caption reads, “Don’t catch them alive.”


Despite the ongoing issues, Facebook saw its operations in Myanmar as both a model to export around the world and an evolving and caustic case. Documents reviewed by AP show Myanmar became a testing ground for new content moderation technology, with the social media giant trialing ways to automate the detection of hate speech and misinformation with varying levels of success.


Facebook’s internal discussions on Myanmar were revealed in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions received by Congress were obtained by a consortium of news organizations, including The Associated Press.


Facebook has had a shorter but more volatile history in Myanmar than in most countries. After decades of censorship under military rule, Myanmar was connected to the internet in 2000. Shortly afterward, Facebook partnered with telecom providers in the country, allowing customers to use the platform without paying for the data, which was still expensive at the time. Use of the platform exploded. For many in Myanmar, Facebook became the internet itself.


Htaike Htaike Aung, a Myanmar internet policy advocate, said it also became “a hotbed for extremism” around 2013, coinciding with religious riots across Myanmar between Buddhists and Muslims. It’s unclear how much content moderation, if any, was happening at the time, whether by people or by automation.


Htaike Htaike Aung said she met with Facebook that year and laid out issues, including how local organizations were seeing exponential amounts of hate speech on the platform and how its preventive mechanisms, such as reporting posts, didn’t work in the Myanmar context.


One example she cited was a photo of a pile of bamboo sticks that was posted with a caption reading, “Let us be prepared because there’s going to be a riot that is going to happen within the Muslim community.”


Htaike Htaike Aung said the photo was reported to Facebook, but the company didn’t take it down because it didn’t violate any of the company’s community standards.


“Which is ridiculous because it was actually calling for violence. But Facebook didn’t see it that way,” she said.

Years later, the lack of moderation caught the attention of the international community. In March 2018, United Nations human rights experts investigating attacks against Myanmar’s Muslim Rohingya minority said Facebook had played a role in spreading hate speech.


When asked about Myanmar a month later during a U.S. Senate hearing, CEO Mark Zuckerberg replied that Facebook planned to hire “dozens” of Burmese speakers to moderate content, work with civil society groups to identify hate figures, and develop new technologies to combat hate speech.


“Hate speech is very language specific. It’s hard to do it without people who speak the local language and we need to ramp up our effort there dramatically,” Zuckerberg said.


Information in internal Facebook documents shows that while the company did step up efforts to combat hate speech in the country, the tools and strategies to do so never came to full fruition, and individuals within the company repeatedly sounded the alarm. In one document from May 2020, an employee said a hate speech text classifier that was available wasn’t being used or maintained. Another document from a month later said there were “significant gaps” in misinformation detection in Myanmar.