Implementing Transparency About Content Moderation
from the not-as-easy-as-it-looks dept
On February 2nd, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that will be discussed at this event -- and over the next few weeks we'll be publishing many of those essays, including this one.
When people express free speech-based concerns about content removal by platforms, one type of suggestion they generally offer is -- increase transparency. Tell us (on a website or in a report or with an informative "tombstone" left at the URL where the content used to be) details about what content was removed. This could happen lots of different ways, voluntarily or not, by law or industry standard or social norms. The content may come down, but at least we'll have a record and some insight into what happened, at whose request, and why.
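As a rough illustration, here is a minimal sketch (in Python, with entirely hypothetical field names) of the kind of record an informative tombstone might surface at the URL of removed content; linking to a Lumen-style notice archive is just one possible design choice, not anyone's actual practice:

```python
# A minimal sketch of a "tombstone" record left at the URL of removed
# content. All field names and values are hypothetical, not any
# platform's actual schema.
tombstone = {
    "url": "https://platform.example/posts/12345",
    "removed_at": "2018-01-20T14:32:00Z",
    "requested_by": "private_party",             # or "government", "platform_initiative"
    "basis": "copyright (DMCA 512(c) notice)",   # the law or platform rule relied on
    "notice_archive": "https://lumendatabase.org/",  # where the full request is archived
}
```

Even this much detail raises the privacy and harm questions discussed below.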
In light of public discussions about platform transparency, especially in the past year, this post offers a few practical thoughts about transparency by online UGC platforms: first, some of the challenges platforms face in figuring out how to be transparent with users and the public about their content moderation processes; and second, the industry practice of transparency reports and what might be done to make them as useful as possible.
Content Moderation Processes & Decisions
So, why not be radically transparent and say everything? Especially if you're providing a service used by a substantial chunk of the public and have nothing to hide. Just post all takedown requests in their entirety and all correspondence with people asking you to modify or remove content. The best place to start answering this is by looking at the incentives a platform faces here and the legitimate reasons it might say less than everything (leaving aside self-interested reasons like avoiding outside scrutiny and saving embarrassment over shortcomings such as arguably inconsistent application of moderation rules or a deficient process for creating them).
First, transparency is sometimes in tension with the privacy of not just users of a service, but any person who winds up the subject of UGC. Just as the public, users, regulators, and academics are asking platforms to increase transparency, the same groups have made equally clear that platforms should take people's privacy rights seriously. The legal and public relations risks of sharing information in a way that abridges someone's privacy are often uncertain and potentially large. This does not mean they cannot be outweighed by transparency values, but I think that in order to weigh them properly, this tension has to be acknowledged and thought through. In particular, however anonymized a given data set is, the risks of de-anonymization increase with time as better technologies come to exist. Today's anonymous data set could easily be tomorrow's repository of personally identifiable information, and platforms are acting reasonably when they choose to safeguard these future and contingent rights by sometimes erring on the side of opacity around anything that touches user information.
Second, in some cases, publicizing detailed information about a particular moderation decision risks maintaining or intensifying the harm that moderation was intended to stop or lessen. If a piece of content is removed because it violates someone's privacy, then publicizing information about that takedown or redaction risks continuing the harm unless the record is carefully worded to exclude the private information. Or, in cases of harassment, it may provide information that enables the harasser or the public (or the harasser's followers, who might choose to join in) to continue that harassment. In some cases, the information can be described at a sufficiently high level of generality to avoid harm (e.g., "a private person's home address was published and removed" or "pictures of a journalist's children were posted and removed"). In other cases, it may be hard or impossible (e.g., "an executive at small company X was accused of embezzling by an anonymous user"). Of course, generalizing at too high a level may frustrate those seeking greater transparency as not much better than not releasing the information at all.
Finally, in some cases publicizing the details of a moderation team's script or playbook can make the platform's rules easier to break or hack by bad-faith actors. I don't think these concerns are sufficient reason to perpetuate existing confidentiality norms. But if platform companies are being asked or ordered to increase the amount of public information about content moderation and plan to do so, they may as well try to proceed in a way that accounts for these issues.
Transparency Reports
Short of the granular information discussed above, many UGC platforms already issue regular transparency reports. Increasing expectations or commitments about what should be included in transparency reports could wind up being an important way to move confidentiality norms while also ensuring that the information released is structured and meaningful.
With some variation, I've found that the majority of UGC platform transparency reports organize information along two axes. The first is the type of request: requests to remove or alter content, and requests for information. The second, within each of those categories, is whether a given request comes from a private person or a government actor. A greater push for transparency might mean adding categories to these reports with more detail about the content of requests and the procedural steps taken along the way, rather than just the usually binary output of "action taken" or "no action taken" that one finds in these reports -- for example, the law or platform rule that is the basis for removal, or more detail about what relevant information was taken into account (such as "this post was especially newsworthy because it said ..." or "this person has been connected with hate speech on [other platform]"). As pressure to proactively filter platform content increases from legislators in places like Europe and from Hollywood, we may want to add a category for removals based on a content platform's own proactive efforts, rather than on a complaint.
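As a sketch of that structure, the following Python models a single report entry along the two axes described above, plus the additional detail just proposed; the names are illustrative, not any platform's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class RequestType(Enum):
    REMOVE_OR_ALTER_CONTENT = "remove_or_alter_content"
    INFORMATION_REQUEST = "information_request"

class RequesterType(Enum):
    PRIVATE_PERSON = "private_person"
    GOVERNMENT_ACTOR = "government_actor"

@dataclass
class ReportEntry:
    # The two axes most reports already cover.
    request_type: RequestType
    requester: RequesterType
    action_taken: bool  # the usually binary output found in reports today
    # Proposed additional detail beyond the binary output.
    basis: Optional[str] = None              # law or platform rule relied on
    factors_considered: list[str] = field(default_factory=list)  # e.g. newsworthiness
    proactive: bool = False                  # platform's own filtering, not a complaint
```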
Nevertheless, transparency reports as they are currently done raise questions about how to meaningfully interpret them and what can be done to improve their usefulness.
A key question I think we need to address moving forward: are the various platform companies' transparency reports apples-to-apples in their categories? Being able to someday answer yes would involve greater consistency in terms across the industry (e.g., are platforms using similar terms, like "hate speech" or "doxxing," to mean similar things, irrespective of their potentially differing policies about those types of content?).
Relatedly, is there a consistent framework for classifying and coding the requests received by each company? Doing more to articulate and standardize coding, though perhaps unexciting, will be crucial infrastructure for providing meaningful classes and denominators for what types of actions people are asking platform companies to take and on what grounds. Questions here include: is there relative consistency in how each company codes a particular request or the type of action taken in response? For example, how would each code a demand email with some elements of a DMCA notice, a threat of suit based on trademark infringement, an allegation of a rules/TOS violation based on harassment, and an allegation that the poster has acted in breach of a private confidentiality agreement? What if a user modifies their content of their own volition based on a DMCA or other request? What if a DMCA notice is received for one copy of a work posted by a user account, but in investigating, a content moderator finds 10 more works that they believe should be taken down based on their subjective judgment that red flag knowledge may exist?
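To make the coding problem concrete, here is a hypothetical sketch of the mixed demand email above; whether it is counted once or once per claim changes every denominator downstream. The schema and category names are invented for illustration:

```python
# Hypothetical coding of a single demand that mixes several legal
# theories. Not any platform's actual schema.
mixed_request = {
    "received": "2018-01-15",
    "claims": [
        {"basis": "dmca", "complete_notice": False},   # some DMCA elements, not all
        {"basis": "trademark", "suit_threatened": True},
        {"basis": "tos_harassment"},
        {"basis": "private_contract_breach"},
    ],
}

def code_request(request: dict) -> list[str]:
    """Code one category per claim, so a mixed demand is counted in each
    relevant class rather than collapsed into one arbitrary bucket."""
    return [claim["basis"] for claim in request["claims"]]

print(code_request(mixed_request))
# ['dmca', 'trademark', 'tos_harassment', 'private_contract_breach']
```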
Another question is how to ensure the universe of reporting entities is complete. Are we missing some types of companies and, as a result, lacking information on what is out there? The first type that comes to mind is nominally traditional online publishers, like the New York Times or Buzzfeed, who also host substantial amounts of UGC, even if it is not their main line of business. Although these companies focus on their identity as publishers, they are also platforms for their own and others' content. (Section 3 of the Times' Terms of Service spells out its UGC policy, and Buzzfeed's Community Brand Guidelines explain, for example, that a post with "an overt political or commercial agenda" will likely be deleted.)
Should the Times publish a report on which comments they remove, how many, and why? Should they provide (voluntarily, by virtue of industry best practices, or by legal obligation) the same level of transparency major platforms already provide? If not, why not? (Another interesting question: based on what we've learned about the benefits of transparency into the processes by which online content is published or removed, should publisher/platforms perhaps be encouraged to also provide greater transparency into non-UGC content that is removed, altered, or never published by virtue of what has traditionally been considered editorial purview, such as a controversial story that is spiked at the last minute due to a legal threat, or factual allegations removed from a story for the same reason? And over time, we can expect that more companies may exist that cannot be strictly classified as publisher or platform, but which should nevertheless be expected to be transparent about their content practices.) Without thinking through these questions, we may lack a full data set of online expression and lose our ability to aggregate useful information about practices across types of content environments before we've started.
Alex Feerst is the Head of Legal at Medium
Reader Comments
Companies being opaque and arbitrary is actually a good thing since it creates disincentives to use their product and encourages competitors to be more open to gain an advantage in the marketplace.
Re:
Content moderation is speech. Asking for regulation is attempting to limit free speech rights.
You don't really believe that, do you? You don't have a free speech right to use a private platform, you know?
Companies being opaque and arbitrary is actually a good thing since it creates disincentives to use their product and encourages competitors to be more open to gain an advantage in the marketplace.
To some extent, I agree. If a company is bad at moderation (as many are) that opens up great new opportunities for others. But so far, there appears to be basically zero empirical evidence that anyone wants a truly unmoderated platform. Any one that exists and gets any level of popularity will quickly become overwhelmed by spam. And that's not going to gain any advantage in any marketplace.
Re: Re:
Try telling that to the SovCits.
Re: Re: Re:
> Try telling that to the SovCits.
Once the King of England tried to rule this nation with force and we kicked him and the German mercenaries out. So, no, little Mikey isn't going to tell me anything.
Masnick is at this instant providing me use of "his" platform for my free speech, so de facto, he's wrong.
Re: Re: Re: Re:
’tis better to keep your mouth closed and let people believe you are a fool than to open your mouth and prove them right.
Re: Re: Re: Re: Re:
Oooh, burned me! -- Got anything even vaguely on topic, kid?
Re: Re: Re: Re: Re: Re:
You first.
Re: Re:
I don't understand the confusion; that's the argument. Platform owners (e.g., Facebook) participate in content moderation, a form of speech, just as a newspaper decides which classified ads to run, or a niche dating website decides which sexual orientation or ethnicity to cater to.
The extent to which content moderation doesn't drive consumer behavior signals how little, as with privacy, the majority of end users value their freedoms.
Re: Re: Re: "I don't understand the confusion, that's the argument." -- It results from corporations trying to put over fictions,
If you just remove the assertions that corporations EVER planned to "don't be evil", then you won't be confused any more. Corporations are nothing but money machines intent on total control: they're Royalist Authoritarians in new form.
Re: Re: Re: Re: "I don't understand the confusion, that's the argument." -- It results from corporations trying to put over fictions,
A platform need not be neutral to be a platform. Stormfront has no obligation to post pro-Black Lives Matter speech any more than The Root has an obligation to host articles promoting White supremacy.
Re: Re: "Private platforms" don't have a right to exist without Permission from The Public.
Corporations operating "platforms" ASK The Public to even form. Our officials give permission based on corporations SERVING The Public. When corporations -- or even government -- does not serve The Public's purposes, then The Public has full right to use whatever force is necessary to bend them to our purposes. -- Read the Constitution. Corporates are not in it: are entirely an invention of fiends in human form called lawyers. -- Read the Declaration Of Independence for authority of We The People to disband even government and form it anew.
Masnick, you're a Royalist and a Tribalist, besides Corporatist.
Re: Re: Re: "Private platforms" don't have a right to exist without Permission from The Public.
Correct me if I am wrong, but this sounds like you believe people should be able to use violence as a means of forcing corporations to serve the whims of whatever mob gets to the CEOs first.
You can argue whether the fiction of corporate personhood is good or bad, or even necessary. Until such time as you can undo the laws surrounding corporate personhood, however, that fiction is part of our reality. You might want to try adapting to it.
Incidentally, a lot of the laws and legal precedents that govern us today are not in the Constitution. After all, that document never explicitly mentions privacy, abortion, minimum mandatory prison sentences, or the criminalization of marijuana.
All hail King Incorporated Pro-Skub!
Re: Re: Re: Re: "Private platforms" don't have a right to exist without Permission from The Public.
> Correct me if I am wrong, but this sounds like you believe people should be able to use violence as a means of forcing corporations to serve the whims of whatever mob gets to the CEOs first.
For once you're right. What the heck do you think police are, and are for? To use violence WHEN NECESSARY to serve We The People's purposes. I'd prefer mere court action, though. But there's always the threat. In France, Le Peuple hauled out the King and chopped his arrogant head off.
> "You can argue whether the fiction of corporate personhood is good or bad, or even necessary." -- THAT IS WHAT I'M ARGUING, and you are saying I don't even have a right to! That'd be an easy win for corporatists, eh, especially when They control ALL the major "platforms".
> "Until such time as you can undo the laws surrounding corporate personhood, however, that fiction is part of our reality. You might want to try adapting to it." -- Well, people can "adapt" to any form of tyranny, but it's not The American Way to just accept what The Rich try to force on you. -- And again, you're just arguing that I can't argue, besides implying it's futile, "you will be assimilated" and all that.
You have a serf mentality, as I've stated. Move to England where you can grovel before Persons. -- Even better, join a cult, and worship some arrogant idiot as a god.
Not me.
Re: Re: Re: Re: Re: "Private platforms" don't have a right to exist without Permission from The Public.
That you believe the ends of “bending corporations to the will of the people” justifies the means of “violence against corporate executives” is frightening.
No, I have a mindset that tells me the law is the law, regardless of whether it is just or fair—and that I cannot change the law by disobeying it. I mean, when was the last time a SovCit changed laws surrounding driver’s licenses by driving a car without one?
No thanks; I prefer to vote for Democrats and Independents.
Re: Re: Re: Re: Re: Re: "Private platforms" don't have a right to exist without Permission from The Public.
Sheesh. What the hell was the Civil Rights Movement about again? Why did those people, besides the military, risk their lives? -- You are giving up ALL we've gained in civilization, kid, from Magna Carta on, if you don't oppose corporate power.
Oh, and because I think recall fits you: you can wear a PINK badge, so everyone KNOWS.
Re: Re: Re: Re: Re: "Private platforms" don't have a right to exist without Permission from The Public.
Oh, wait -- maybe you have! You seem unable to question Masnick. You are a lost little puppy who found a rabid skunk and accept it as your "alpha".
Ongoing and increasing censorship from globalist corporate "platforms" using the non-Constitutional lawyer's fiction of a "First Amendment Right" to suppress yours:
This link is not new, but is apropos enough and I'm tired of this topic, as clearly are all but one rabid fanboy. Tomorrow is the 2nd when the weenies gather and natter their one viewpoint, no opposition or dissent, exactly as the commies they are.
"Bridget Johnson, who specializes in covering issues related to terrorism, was reportedly suspended from Twitter not for posting an offensive tweet or for arguing with anyone, but just because - as Twitter has offered Johnson no explanation for this sudden act of censorship."
http://investmentwatchblog.com/twitter-just-silenced-pj-medias-terrorism-editor-in-latest-censorship-scheme-to-eliminate-non-liberal-points-of-view/
Masnick often slips in this alleged Right of corporations to censor "natural" persons arbitrarily and unaccountably. Masnick is a complete corporatist.
Oh, and transparency? You cannot even get an answer to whether there IS a Moderator here!
Re: Ongoing and increasing censorship from globalist corporate "platforms" using the non-Constitutional lawyer's fiction of a "First Amendment Right" to suppress yours:
To quote Mr. Masnick himself: “You don't have a free speech right to use a private platform, you know?”
Re: Re: Ongoing and increasing censorship from globalist corporate "platforms" using the non-Constitutional lawyer's fiction of a "First Amendment Right" to suppress yours:
A) I'd already quoted that at least twice.
B) You have now proved that you can use copy-paste. Nothing else out of you, not a single thought of your own on topic.
C) Within 5 minutes, you replied at me on this nearly dead topic. Evidently you monitor the site fanatically. I think you've just made top fanboy, congrats.
Re: Re: Re: Ongoing and increasing censorship from globalist corporate "platforms" using the non-Constitutional lawyer's fiction of a "First Amendment Right" to suppress yours:
I don’t need to waste my brainpower with original thought when copypasting a cogent argument, one for which you have no rebuttal, will do the trick.
Yeah, you just re-phrased my comment. This is, however, a rebuttal:
Still no thought on topic from you, as you admit.
[ reply to this | link to this | view in chronology ]