
Facebook’s language gaps weaken screening of hate and terrorism


As the Gaza war raged and tensions surged across West Asia last May, Instagram briefly banned the hashtag #AlAqsa, a reference to the Al-Aqsa Mosque in Jerusalem’s Old City, a flash point in the conflict.

Facebook, which owns Instagram, later apologised, explaining its algorithms had mistaken the third-holiest site in Islam for the militant group Al-Aqsa Martyrs Brigade, an armed offshoot of the secular Fatah party.

For many Arabic-speaking users, it was just the latest potent example of how the social media giant muzzles political speech in the region. Arabic is among the most common languages on Facebook’s platforms, and the company issues frequent public apologies after similar botched content removals.

Now, internal company documents from former Facebook product manager-turned-whistleblower Frances Haugen show that the problems are far more systemic than a few innocent mistakes, and that Facebook has understood the depth of these failings for years while doing little about them.

Such errors are not limited to Arabic. An examination of the files reveals that in some of the world’s most volatile regions, terrorist content and hate speech proliferate because the company remains short on moderators who speak local languages and understand cultural contexts. And the company has failed to develop artificial-intelligence tools that can catch harmful content in different languages.

Not enough moderation

In countries like Afghanistan and Myanmar, these gaps have allowed inflammatory language to flourish on the platform, while in Syria and the Palestinian territories, Facebook suppresses ordinary speech, imposing blanket bans on common words.

“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” said Eliza Campbell, director of the Middle East Institute’s Cyber Program. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”

In Myanmar, the company acknowledged in its internal reports that it had failed to stop the spread of hate speech targeting the Rohingya Muslim population.

In India, the documents show Facebook employees debating last March whether the company could clamp down on the “fear mongering, anti-Muslim narratives” that a far-right Hindu nationalist group broadcasts on its platform.

