How Did Social Media Platforms Fail to Act on Nearly 90% of Islamophobic Content?

Adham Hamed | 2 years ago


According to new research released by the Center for Countering Digital Hate (CCDH), the social media giants Facebook, Twitter, Instagram, YouTube, and TikTok "fail" to take action on most anti-Muslim posts.

According to the international non-profit, which has offices in Washington, D.C., and London, the five platforms combined failed to act on around 89 percent of the anti-Muslim posts reported to them between February 15 and March 9, 2022.

The "Failure to Protect" report from the CCDH comes amid continuing arguments over whether and how social media platforms should be controlled in the future. To this point, the platforms have mostly been self-regulated, but US politicians are increasingly pressing for reforms.


No Protection

The report opens by noting that Meta, Twitter, and Google supported the Christchurch Call in the aftermath of the 2019 mass shootings at two mosques in Christchurch, New Zealand.

The live-streamed shootings spurred the creation of the Christchurch Call, an initiative that seeks to eliminate terrorist and violent extremist content online.

According to the CCDH, the latest analysis finds that the five major social media platforms "regularly failed to combat anti-Muslim racism—even after offensive messages were reported to moderators."

The CCDH said it identified 530 posts across the five platforms that "contain unsettling, prejudiced, and degrading information that targets Muslim people through racial caricatures, conspiracies, and false claims." Those posts had been viewed at least 25 million times.

According to the group, it reported all of the posts to the social media companies using their own reporting tools. "Many" of the posts were "clearly recognizable" as "abusive content," the CCDH said, yet no action followed.

After reporting the posts, the CCDH said its auditors "examined every post and recorded any action taken" by the companies.

Of the 125 posts the CCDH reported to Facebook, seven (around 5.6 percent) were removed. Of the 105 posts it reported to Twitter, three (around 2.9 percent) were removed.

According to the CCDH, Instagram and TikTok took action more often, against both posts and the accounts behind them. Of the 227 posts reported to Instagram, 12 were removed and 20 accounts were deactivated, a response rate of 14.1 percent. Of the 50 posts reported to TikTok, 12 were removed and six accounts were deactivated, a response rate of 36 percent.
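The response rates above count both removed posts and deactivated accounts against the number of posts reported. A minimal Python sketch of that arithmetic, using the figures quoted from the report (where the report cites only removals, deactivations are assumed to be zero):

```python
# Response rate = (posts removed + accounts deactivated) / posts reported.
# Figures are those quoted from the CCDH report above; for Facebook, Twitter,
# and YouTube only removals are cited, so deactivations are assumed to be 0.
figures = {
    "Facebook":  {"reported": 125, "removed": 7,  "deactivated": 0},
    "Twitter":   {"reported": 105, "removed": 3,  "deactivated": 0},
    "Instagram": {"reported": 227, "removed": 12, "deactivated": 20},
    "TikTok":    {"reported": 50,  "removed": 12, "deactivated": 6},
    "YouTube":   {"reported": 23,  "removed": 0,  "deactivated": 0},
}

for platform, f in figures.items():
    rate = 100 * (f["removed"] + f["deactivated"]) / f["reported"]
    print(f"{platform}: {rate:.1f}% response rate")
# Facebook: 5.6%, Twitter: 2.9%, Instagram: 14.1%, TikTok: 36.0%, YouTube: 0.0%
```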


YouTube’s 'Mess'

Meanwhile, the CCDH says YouTube took no action on any of the 23 posts it reported to the company. When contacted for comment, YouTube disputed the report's conclusions, saying in a statement that it had taken action on some of the reported content.

"YouTube's hate speech and harassment standards set clear criteria forbidding content that advocates violence or hatred against individuals or groups based on religion, ethnicity, or other protected characteristics," YouTube told Newsweek.

"Five of the videos detected by CCDH were deleted for breaching our hate speech policy, and eight were age-restricted."

YouTube also directed Newsweek to its community guidelines, which "make it clear that we do not tolerate hate speech or harassment on YouTube," as well as its hate speech policy, which "specifically prohibits content that promotes violence or hatred against individuals or groups based on attributes such as immigration status, nationality, or religion."

YouTube did not say which of the 23 reported posts it removed or restricted, or when it acted on the reported content.

The CCDH told Newsweek it was not aware that any of the posts it had flagged to YouTube were taken down.


Continuing Hate

According to CCDH CEO Imran Ahmed, whatever action YouTube took likely came after the CCDH completed the final audit for its report.

"They were slow to reply," Ahmed remarked. "The important point is that they've acknowledged and responded to the fact that these videos violate their rules. That raises the question of why they didn't do it sooner."

The findings "mirror CCDH's past Failure to Act reports," the organization said in a post on its website about this week's report.

The report concludes with several "calls to action," among them the hiring and training of content moderators by social media companies, specific recommendations for removing anti-Muslim pages and hashtags, and a number of legislative measures.

Sumayyah Waheed, Muslim Advocates’ senior policy counsel, said: “Dangerous anti-Muslim content that clearly violates social media companies’ rules continues to run rampant on platforms—leading to threats, violence, and even genocide against Muslims worldwide.

"This eye-opening report confirms what we’ve experienced for years: social media platforms are failing to take down prohibited anti-Muslim content online that leads to anti-Muslim hate and violence in the real world,” she added.

Rita Jabri Markwell of the Australian Muslim Advocacy Network (AMAN) said: "Three years on from Christchurch, social media companies are full of spin when it comes to fighting the drivers of violence. We are not surprised by these findings but it's a relief to have our experiences investigated and validated."

"Across the world, from India to Australia, Europe to North America, anti-Muslim conspiracy theories have been used to stir violence and extreme politics," she noted.