Every few months a familiar piece of political theatre repeats itself. In humble tones, Big Tech executives answer questions from US lawmakers. Sometimes the questioning is smart and probing. Sometimes the questions are ill-informed showboating.
The result is normally the same. Facebook – Big Tech’s problem child – says it’s sorry for the mess it made, and promises to do better. The other tech firms look good by comparison and escape relatively unscathed.
On Thursday, Facebook’s Mark Zuckerberg, Twitter’s Jack Dorsey, and Google CEO Sundar Pichai appeared in front of a US House of Representatives committee. The aim was to discuss Big Tech’s role in fomenting and elevating false information, and the regulations that would stop social media promoting this misinformation.
Interestingly, this time around Facebook changed tack. Not only did Mark Zuckerberg stress the efforts his company has made to combat the spread of misinformation, he also proposed what he referred to as “thoughtful reform” of a key piece of US legislation that protects technology companies.
The legislation in question is Section 230 of the Communications Decency Act. Here’s what it says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
This section may have sufficed for the internet in 1996. But that was before Facebook, Twitter and Google. While it undoubtedly protects freedom of expression and innovation on the internet, it seems hopelessly over-simplistic today.
So what’s Mark Zuckerberg looking for? Last week he suggested that the protection Section 230 provides should only be available to companies that adhere to a set of best practices.
“Instead of being granted immunity, platforms should be required to demonstrate they have systems in place for identifying unlawful content and removing it,” Zuckerberg said.
Platforms should not be held liable if a particular piece of content evades detection, he argued – that would be impractical for firms operating at the scale of Facebook.
Yes, he does seem to have suggested that Section 230 protections should be available to any company that can prove it is trying really hard.
Who decides what trying sufficiently hard amounts to? This should fall to a third party, says Zuckerberg, and different rules should apply to companies of different sizes. Zuckerberg also called for transparency around “the processes by which companies make and enforce their rules about content that is harmful but legal”.
Zuckerberg and other Big Tech CEOs seem to accept that Section 230 is going to change. By suggesting specific reforms he’s trying to control the narrative to favour the type of reforms that suit Facebook.
Making Section 230 protections dependent on the technological tools Big Tech has built to tackle the problem favours the big players. Smaller online platforms would balk at this type of reform, as they lack the resources to comply – so there’s a risk of further consolidation of power.
There’s also an absurdity to Zuckerberg’s argument. He is arguing that Facebook should be treated as a non-media company, as long as it can prove it has the technology to automate media-style decision-making.
By the same logic, the rules of the road shouldn’t apply to a self-driving car.
But the biggest issue with Zuckerberg’s proposal is that it is divorced from any notion of effectiveness. This gives Facebook a get-out-of-jail-free card.
If this approach were accepted, Facebook could gain the protection of Section 230 thanks to its efforts to combat misinformation – regardless of the societal damage caused by its core business.
Facebook uses algorithms that amplify what engages people – regardless of whether it appeals to their better nature or worst instincts. The algorithms can’t tell the difference.
Section 230 is a blunt instrument that has created a deeply distorted variable liability marketplace around information and media.
Publishers are not compensated for the additional liability they carry – and technology companies, who socialise the risks around misinformation, avail of protection.
And due to the global reach of technology, this inequitable situation, rooted in US law, has a huge effect on media companies across the world.
It’s positive that the Big Tech companies are engaging with the question of how to change this legislation.
But if policy approaches like Zuckerberg’s are the best they can come up with, they can expect to be appearing before plenty more hearings.