Banning Trump won’t fix social media: 10 ideas to rebuild our broken internet – by experts
It was nearing midnight on Tuesday 12 January when the final plank of Donald Trump’s social media platform fell away. Facebook, Instagram, Twitter, Twitch, Snapchat and, lastly, YouTube had all come to the same conclusion: that their platforms – multibillion-dollar American companies that dominate American political discourse – could not be safely used by the president of the United States.
In less than a week, a new president will take office. But considering the role social media played in elevating Trump to the presidency and its part in spreading misinformation, conspiracy theories and calls for violence, it’s clear that the end of the Trump presidency won’t provide an instant fix. There is something fundamentally broken in social media that has allowed us to reach this violent juncture, and the de-platforming of Trump is not going to address these deeper pathologies.
Much of the ensuing debate over the Trump bans has played out along predictable and unproductive lines, with free speech absolutists refusing to acknowledge that harassment and hate lead to the silencing of marginalized groups, and those who support tougher crackdowns tending to elide good-faith concerns about overly aggressive censorship. It’s a fight we’ve been having about the internet for so long that people on either side can probably recite the other’s lines from memory.
Still, away from the clamor of social media, meaningful and progressive work is being done by researchers and activists who have been living with and studying the brokenness of social media for years now. We asked a dozen of them to help us move the debate forward by sharing their proposals for concrete actions that can and should be taken now to prevent social media platforms from damaging democracy, spreading hate, or inciting violence.
The result is this list of 10 ideas to address our broken internet. Some of these proposals are for the tech companies, some would need to be enacted by governments, and some are challenges for us all. None would be a panacea, but any of them would be a good start. At the very least, they may help us move beyond the strictures of the ossified free speech debate and start having better, more nuanced conversations about the path forward.
Some responses have been lightly edited for length and clarity.
Hire 10,000 librarians for the internet
We need 10,000 librarians hired with the mandate of fixing our information ecosystem. This workforce would be global in size, but local in scope, focused on building systems for curating timely, local, relevant, and accurate information across social media platforms. Their work would be akin to the public interest obligation already applied to radio, where we legally require that “broadcasting serve the public interest, convenience and necessity” as a condition of getting access to the airwaves. Social media companies should tune in to the frequency of democracy, rather than the static of disinformation.
Joan Donovan, research director at the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy
Fund training for teachers, our ‘informational first responders’
Social media feeds people lies, hate, and radicalizing messages. But people also bring lies, hate, and radicalizing messages to social media – or at least carry those interests with them when they go online, in turn triggering recommendation algorithms to serve up false, bigoted, and radicalized options. Something needs to be done about all those beliefs and choices too – because otherwise, even the most sweeping technological or regulatory fixes will be treating the symptoms, not the underlying causes.
We need funding for empirical, longitudinal research into media literacy education, and funding for the development of collaborative media literacy curricula. We also need funding for research to assess what media literacy training K-12 teachers themselves need, and funding to ensure that teachers across subject areas receive that training, so that they are equipped for their role as informational first responders. We do a great disservice to our teachers when the message is: here’s nothing, now go save democracy.
To be clear, educational responses aren’t more important than technological responses. They’re mutually reinforcing. When more people have the right media literacy training, there will be less pollution on social media that needs cleaning up. That will make it easier to see where the breaks are, and how to target regulatory and platform policy solutions as precisely as possible.
Whitney Phillips, co-author of You Are Here: A Field Guide for Navigating Polarized Speech, Conspiracy Theories, and Our Polluted Media Landscape
Understand the limitations of the first amendment …
Considering remedies for the proliferation of white supremacist harassment and organizing online requires that society de-colonize our assumptions about racism, speech, dissidence and violence. Content moderation rules, the rule of law and big tech profits must all be grounded in three underlying assumptions:

1. First amendment protections aren’t equally shared across all members of American society. Black, indigenous and other people of color in the US have been historically and structurally censored in law and through de facto social norms. The suppression of Black dissent by police violence, for example, has been on increasingly visible display. It is not enough to have the right to free speech; one must have the power to exercise it as well. As a result, merely relying on free speech protections is insufficient to protect everyone’s free speech online.
2. It isn’t merely speech that’s being acted upon as the litmus test for social media bans. It’s the real-world impact of that speech. White supremacists aren’t being kicked off platforms because of words; they’re being denied the ability to amplify violent racism and organize real-world harm.
3. Protecting speech and protecting democracy are equally important but not always aligned. We can’t understand or deal with freedom without also dealing with power and governance.
Americans are attached to the myth that the United States is uniquely protective of free speech. But the American view of free speech, exported to the world through the dominance of US-based tech companies, is simplistic and elitist: free speech means ensuring that the most violent, racist, misogynist individuals are free to speak without consequences, whatever the death, destruction, and silencing of other people’s speech that results. Even in the midst of an insurrection openly fueled and organized on social media platforms, the loudest concerns are about what those platforms are finally taking down, not what they have been leaving up. The tech industry’s claimed commitment to free speech, reinforced by the sweeping immunity it receives through Section 230, contributes to the American public’s confusion between government action restricting speech, which implicates the first amendment, and private action, which does not, as well as to the erasure of the distinction between speech and conduct, and between protected and unprotected speech.
… and think beyond the US and Europe
From my perspective, the first step is to broaden the conversation. For too long, we have viewed platform issues through a US- and Euro-centric framework, when the worst implications of bad regulation and governance occur in the so-called global south. Companies should immediately expand their consultations to be truly inclusive of a more diverse range of civil society. Furthermore, it is high time that companies adopt the Santa Clara Principles on Transparency and Accountability in Content Moderation, which are supported by dozens of digital and human rights groups around the world and present a set of baseline minimum standards that include public-facing transparency about how rules are implemented and content is amplified, notice to users, and a path to remedy when content moderation mistakes are made.
Protect the journalists and researchers who study the platforms
We can’t fix what we don’t understand. The social media companies are shaping public discourse by algorithmically amplifying the speech that is most likely to maximize user “engagement”. This decision to prioritize “engagement” appears to be fueling a destructive feedback loop that promotes the spread of hate, misinformation, and propaganda.
Unfortunately, we don’t fully understand this phenomenon, or how we might fix it, largely because the social media companies generally forbid independent research into their platforms that relies on the basic tools of digital journalism and research. Facebook’s terms of service, for example, forbid journalists and researchers from collecting information from the platform by “automated means”, even when that research scrupulously respects user privacy and would be in the public interest.

There are at least three ways to overcome this barrier to independent research. First, the companies should amend their terms of service to create a safe harbor for privacy-respecting research in the public interest. Second, if they don’t, Congress and other legislative bodies should consider legislation immunizing bona fide investigations of the platforms. And third, courts should refuse to enforce the social media companies’ terms of service against research projects that respect user privacy and are manifestly in the public interest.
Change recommendation algorithms to promote accurate information – and reward those who fight online harms
Companies should evaluate and make changes to how they recommend content and to their entire advertising system. Indoctrination into hate ideologies online is often propelled by the companies themselves recommending ever more content geared toward fortifying your worldview, not opening users up to accurate information.
Companies should provide algorithmic rewards for people and groups actively working to combat disinformation and misinformation online. There are people already actively engaging in “technological placekeeping” – the practice of active care and maintenance of digital places, both as a defense mechanism against manipulation and mis/disinformation and in the service of preserving the health of their respective community’s uses of information and communication. Just as they seek out and work with influencers creating pop culture content, these companies should think about how to incentivize and reward those on the front line already doing the work to create safe spaces on their platforms.
Brandi Collins-Dexter, visiting fellow at the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy and senior fellow at Color Of Change

Implement strong rules against harassment, hate, and harm
1. Content protections: Stop amplifying hate and harm. Define harassment and legislate its ban from social media. At Reddit in 2015, we defined it as “systematic and/or continued actions to torment or demean someone in a way that would make a reasonable person (1) conclude that Reddit is not a safe platform to express their ideas or participate in the conversation, or (2) fear for their safety or the safety of those around them”.
2. Privacy protections: Ban unauthorized nude photos and revenge porn. Ban personally identifiable information.
3. For people who are repeat offenders: Ban users who promote hate and harm. Track them to make sure they don’t continue.
4. For groups of people whose content has been banned: Don’t let them re-create the same problem that has been eliminated in 1 or 2. No new accounts if they’ve been banned. No new pages or subreddits or other sections for the same groups of people.
5. For cross-platform harassment and harm: Take a page from the fraud-fighting sector and create a shared database of truly problematic accounts that should not get the benefit of the doubt. This should cover child pornography, domestic terrorism, other terrorism, online harassment, and sharing private information.
Enforce the rules platforms already have
In recent days, the various platforms that Trump has used so effectively to cultivate and directly reach his “base” over the past four years have, at last, begun enforcing their existing rules and applying them to him.
This effort has not required a drastic shift in platform terms of service (although internal accepted policy and practice may be another matter). Nor has it required the invention of a powerful new machine learning algorithm or the onboarding of thousands of new commercial content moderators. It has simply been a matter of Trump no longer receiving carte blanche to do and say whatever he likes without consequence, and of the rules of engagement that apply to all users of Twitter, of YouTube, of Facebook and of Snapchat applying, likewise, to Trump.

What, then, is my grand solution going forward? That platforms enforce their rules, transparently and with immediacy, for all users, and hold them accountable for their behavior online. Do so even more, not less, when the person on the other end of the tweet holds the world hostage to his mercurial, dangerous whims. To have offered Trump a lesser standard, to have refused to hold him to account until the eleventh hour, has put American democracy and the stability of the globe on the line and made social media firms complicit in the destabilization from which we have yet to emerge.
Sarah T Roberts, PhD, co-director of the UCLA Center for Critical Internet Inquiry and author of Behind the Screen: Content Moderation in the Shadows of Social Media
Social media platforms must stop giving deference to world leaders, government officials, and elected officials who exploit their platforms to incite hate, harm, and harassment against their political opposition. Not holding them to the same standard as every other user on the platform gives already powerful people another weapon to wield, because they know their content won’t be removed. Donald Trump understood this advantage and used it to great effect over the course of his presidency to spread hateful rhetoric, smear his enemies, and call his supporters to DC for what became a violent attempted coup. This is a teachable moment for the tech companies, and concrete action on changing world leaders’ policies has the potential to save the lives of activists, dissidents, and citizens across the globe.
Address the ‘architectural exclusion’ of marginalized communities from platforms
Some of our content moderation problems result from the architectural exclusion of communities from using platforms and participating in discourse on equal terms. Many communities are affected by this, but by way of one example, there is much more that platforms can do to be accessible to users with disabilities: requiring the provision of authoring tools for closed captions and image descriptions, developing interfaces that nudge users toward making their own content accessible, better developing and tuning general-purpose artificial intelligence techniques like speech-to-text and text-to-speech for accessibility purposes, including and hiring people with disabilities in the testing and development of new services, and implementing compatibility with web and app store standards for accessibility. Enforcing and extending existing regulatory mandates for accessible design and functionality under both disability and telecommunications law, and providing broad exemptions in copyright law for making user-generated content available in accessible formats, are important avenues for addressing these problems.

Reform tech’s liability shield to create accountability for the conduct – not speech – of users
Section 230 [the US law that shields tech platforms from liability for third-party content] allows powerful tech companies to invoke the laissez-faire principles of the first amendment to absolve themselves of responsibility for the abuse and extremism that flourish on their platforms, undermining the concept of collective responsibility necessary for a functioning society, both online and off. Section 230 should be amended so that online platforms are no longer immunized from liability for the conduct, as opposed to the speech, of their users, or when those platforms encourage, profit from, or display deliberate indifference to harmful content.
CEOs and senior executives should be directly fined by the Federal Trade Commission for harms that stem directly from their platforms and that could reasonably have been predicted and addressed.
Brandi Collins-Dexter, visiting fellow at the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy and senior fellow at Color Of Change