Facebook admits finding, banning hate speech is one of its weaknesses
Facebook CEO Mark Zuckerberg makes the keynote speech at F8, Facebook's developer conference, Tuesday, May 1, 2018, in San Jose, Calif. (Marcio Jose Sanchez/AP)
SAN FRANCISCO — Getting rid of racist, sexist and other hateful remarks on Facebook is challenging for the company because computer programs have difficulty understanding the nuances of human language, the company said Tuesday.
In a self-assessment, Facebook said its policing system is better at scrubbing graphic violence, gratuitous nudity and terrorist propaganda. Facebook said automated tools detected 86 percent to 99.5 percent of the violations in those categories.
For hate speech, Facebook’s human reviewers and computer algorithms identified just 38 percent of the violations. The rest came after Facebook users flagged the offending content for review.
Tuesday’s report was Facebook’s first breakdown of how much material it removes. The statistics cover a relatively short period, from October 2017 through March of this year, and don’t disclose how long, on average, it takes Facebook to remove material violating its standards. The report also doesn’t cover how much inappropriate content Facebook missed.
Facebook said it removed 2.5 million pieces of content deemed unacceptable hate speech during the first three months of this year, up from 1.6 million during the previous quarter. The company credited better detection, even as it said computer programs have trouble understanding context and tone of language.
Facebook took down 3.4 million pieces of graphic violence during the first three months of this year, nearly triple the 1.2 million during the previous three months. In this case, better detection was only part of the reason. Facebook said users were more aggressively posting images of violence in places like war-torn Syria.
The increased transparency comes as the Menlo Park, California, company tries to make amends for a privacy scandal triggered by loose policies that allowed a data-mining company with ties to President Donald Trump’s 2016 campaign to harvest personal information on as many as 87 million users. The content screening has nothing to do with privacy protection, though, and is aimed at maintaining a family-friendly atmosphere for users and advertisers.
The report also covers fake accounts, which have gotten more attention in recent months after it was revealed that Russian agents used fake accounts to buy ads in an attempt to influence the 2016 U.S. elections.
Facebook previously estimated that fake accounts make up 3 to 4 percent of its monthly active users. Tuesday's report said Facebook disabled 583 million fake accounts during the first three months of this year, down from 694 million during the previous quarter, though the company said the number tends to fluctuate from quarter to quarter. More than 98 percent of those accounts were caught before users reported them.
© 2018 The Canadian Press