‘AI models are good at picking up nudity, but less good for hate speech’

If we raise an issue about hate speech, top civil society organizations from across the world will submit a brief, says Sudhir Krishnaswamy, Member, Facebook Oversight Board
NEW DELHI: In just over a year of its existence, Facebook’s Oversight Board has admitted 23 appeals on content moderation decisions and made over 50 policy recommendations to the company. In an interview, Sudhir Krishnaswamy, the only Indian member on the board and vice chancellor of the National Law School of India University, discussed the board’s work so far, its expansion plans and the need for algorithmic moderation of content. Edited excerpts:
Now that it has been one year, what do you think of Facebook as a platform?
Facebook is a different kind of platform depending on the jurisdiction you are in. In countries like Myanmar or parts of West Africa, for example, Facebook is a primary media source. In a jurisdiction like India, it’s mixed: Facebook is big, but other media is also big, and platforms that are ostensibly private messaging but also work as public media are big too.
I think that background understanding of the media environment is important because Facebook plays a different role in each of these jurisdictions.
But I think what you’re asking is what Facebook can be. I suppose its promise is that it allows for a disintermediated community, that communities can form irrespective of geography, class, background and so on. That kind of rapid, large-scale community formation is its promise. But as we now know with the entire social media universe, where everyone is both a user and a publisher, the very format of the platform allows a range of other issues to crop up. The idea that an organic community is the automatic result of a peer-to-peer social media network has been severely tested. It has been tested across all platforms, and Facebook is no exception. The challenge of what one has to do about it has not been resolved so far in any jurisdiction.
We often say Facebook’s own policies are evolving, as are global laws. As the board, are you equipped to make recommendations?
The board is an exceptional body in terms of the kinds of people on it. We take our work very seriously. If there are questions we have doubts about, like, say, the nature of the Amhara-Tigrayan conflict in Ethiopia, we commission an opinion on that. We will consult a security group that is world-renowned and expert in an area, get their feedback in a period of 7-12 days and factor that into the opinion. We have no hubris; whoever knows, we will ask them.
We also get public submissions. If we raise an issue about hate speech, top civil society organizations from across the world will submit a brief. And those are superbly researched, well-argued briefs, saying you should go in this direction for this matter. So my sense is that our process is really strong.
All big platforms want to use algorithmic moderation, but issues remain. Is artificial intelligence (AI) a viable solution?
It’s an evolving field. The balance between legal rules and software code in various areas is still being worked out. On content moderation, we find AI models are pretty proficient at dealing with some content. For example, they’re very good at picking up nudity, pornography, banned substances, and arms and ammunition, but less good for hate speech or incitement, because incitement involves subtle use of language. This is where the battle lies. Even with nudity there are difficult cases, say, images featuring female breasts but concerning breast cancer, which the algorithms are not able to pick up very well. In some areas the algorithms are slightly off, but they are being trained and retrained. In other areas they are quite off, and I think this is what frustrates a lot of users.
Take, for instance, hate speech and counter-speech: somebody says something, you say something back, and it is your post that is taken down while the original message stays up. These are difficult questions, and I think people are trying to automate more effectively.
Because at the scale of these platforms, there will be an automation layer. There is a certain misunderstanding of scale when people ask, why don’t you use humans to do everything? Big platforms have to use automation to a certain extent. How much, and how good, are the relevant questions.