Why Facebook self-regulation reports need country-specific and qualitative data

The report said that 95 per cent of the detection was proactive compared to 89 per cent earlier. This means that Facebook could take action on or remove posts before users reported them.

Sonal Khetarpal | August 13, 2020 | Updated 19:51 IST

Facebook recently released its Community Standards Enforcement Report for the period between April and June 2020. The report emphasises the increasing role of technology in reviewing and moderating content, especially during COVID-19, when it had to send its content reviewers home.

The report said that 95 per cent of the detection was proactive compared to 89 per cent earlier. This means that Facebook could take action on or remove posts before users reported them. It also took action on 22.5 million pieces of hate speech content in the said period, compared to 9.6 million in January-March 2020.

While this is a considerable improvement over the last quarter, technology experts say it is difficult to make an assessment from global figures alone. Aniruddh Nigam, Research Fellow, Vidhi Centre for Legal Policy, says, "Without the release of country-specific data, it becomes difficult for transparency to be effective." Global figures make it difficult to determine whether Facebook's increase in action spans all geographies or is limited to the US, where it has faced regulatory backlash. "This makes it difficult to make an assessment if Facebook is dealing with misinformation in India as robustly as it is dealing with it in the US," he says.

The lack of country-specific data seems counter-intuitive considering that the social media giant offers highly specific targeting to its advertisers to reach the right audience. "While it is not clear what goes on behind the scenes at Facebook, it is likely that they have the capability and the data on a country-wise basis," says Nigam.

Prasanto K Roy, Tech Evangelist & Policy Expert, adds that while the reports share quantitative data, it would be immensely valuable if qualitative data were also made public. "India has huge concerns around fake news, which has had serious consequences in the country. Online harassment and violence against minorities of all kinds (vulnerable communities/gender/religious minorities) are other threats. All such issues may not show in the overall maths of things like the number of posts where action was taken. In fact, such groups often become victims of takedown action," says Roy. Posts from minority groups often get removed because many more people band together and report that content.

This can have serious repercussions for freedom of speech and expression, especially at a time when the social media giant is relying far more on technology than on human moderation.

Facebook admits that due to the COVID-19 pandemic, human content moderation has been significantly impacted, and while some processes have been restored, it has been unable to offer users adequate opportunities to appeal decisions.

The report said, "With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram. Despite these decreases, we prioritized and took action on the most harmful content within these categories. Our focus remains on finding and removing this content while increasing reviewer capacity as quickly and as safely as possible."

Akriti Gaur, a technology law and policy expert, said: "The reliance on technology-first solutions has understandably increased and needs to be scrutinised in detail. More information on where human moderation has reduced, especially for subsets of the categories discussed in the report, will be useful."

She adds that more information should also be shared on how the filters themselves are defined. "While the quarterly reports throw light on what has been happening with our speech on Facebook, underlying questions on what constitutes hate speech in different linguistic/cultural/regional/religious/community contexts are worth exploring in detail. It is equally important to understand that while reliance on automated filtering technologies has increased, there should be more scrutiny on how these filters define and remove hate or unwanted speech."

One big announcement was that Facebook will get an independent third party to audit the numbers published in the Community Standards Enforcement Report. It is a step in the right direction and more than what Facebook was doing in the past. But experts are wary of this move.

"Students do not grade their homework nor do they pick who grades their answers.  With Facebook choosing its own auditor, I am not sure, if it will be really effective in fostering the accountability that should be imposed on an entity as large as Facebook," says Nigam of Vidhi Centre.  

He adds that since the concerns relate to hate speech, child pornography or terrorist content, governments across the world should be auditing Facebook.

Given the influence Facebook now has over the world, in terms of individual lives and how conversations happen at a social level, it warrants far more transparency and accountability. While there is a degree of self-regulation, and the report is proof of it, Facebook's measures often seem responsive and reactive.

"This document from Facebook shows how it is addressing the concerns of users and government in terms of hate speech and fake news. It also demonstrates to the governments the work they are doing at self-regulation," says Roy.  

From that perspective, this document can have far reaching consequences. "If they were seen as not doing anything they would be subject to far more aggressive regulations. If they are able to demonstrate that they are doing what is expected, then there are higher chances that the government and policy makers will be less inclined towards aggressive regulation or over regulation," says Roy.  

For now, a lot of ground is left to be covered.