Facebook says no one flagged NZ mosque shooting livestream

AP  |  London 

Facebook says none of the 200 or so people who watched live video of the New Zealand mosque shooting flagged it to moderators, underlining the challenge tech companies face in policing violent or disturbing content in real time.

The company removed the video "within minutes" of being notified by police, said Chris Sonderby, Facebook's deputy general counsel.

"No users reported the video during the live broadcast," and it was watched about 4,000 times in total before being taken down, Sonderby said. "We continue to work around the clock to prevent this content from appearing on our site, using a combination of technology and people."

Facebook has previously said that in the first 24 hours after the massacre, it removed 1.5 million videos of the attacks, "of which over 1.2 million were blocked at upload," implying that about 300,000 copies made it onto the site before being taken down.

The video's spread puts renewed pressure on Facebook and other sites such as YouTube and Twitter over their content moderation efforts. Many question why Facebook in particular wasn't able to detect the video and take it down more quickly.

On Tuesday, New Zealand Prime Minister Jacinda Ardern expressed frustration that the footage remained online four days after the killings. She said she had received "some communication" from Facebook's chief operating officer, Sheryl Sandberg, on the issue.

"It is horrendous and while they've given us those assurances, ultimately the responsibility does sit with them." Facebook uses and to detect objectionable material, while at the same time relying on the public to flag up content that violates its standards.

Those reports are then sent to human reviewers who decide what action to take, the company said in a video in November, which also outlined how it uses "computer vision" to detect 97 percent of graphic violence before anyone reports it. However, it's less clear how these systems apply to Facebook's live streaming.

To report live video, a user must know to click on a small set of three gray dots on the right side of the post. When you click on "report live video," you're given a choice of objectionable content types to select from, including violence, bullying and harassment. You're also told to contact law enforcement in your area if someone is in immediate danger.

Before the company was alerted to the video, a user on 8chan had already posted a link to a copy of it on a file-sharing site, Sonderby said. 8chan is a dark corner of the web where those disaffected by mainstream sites often post extremist, racist and violent views.

In another indication of the video's spread by those intent on sharing it, the Global Internet Forum to Counter Terrorism, a group of global companies led by Facebook, YouTube and Twitter, said it added more than 800 different versions to a shared database used to block violent terrorist images and videos.

The group said it added "digital fingerprints" for visually distinct versions of the video to its database. The move came in response to users who tried to avoid detection by editing or repackaging the video so that copies carried different digital fingerprints.
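As a rough illustration only, and not a description of the forum's actual system, a perceptual "fingerprint" such as an average hash reduces a video frame to a short bit string; cropping, recompressing or watermarking a copy shifts those bits, which is why each visually distinct version needs its own database entry. The sketch below uses the Pillow library and hypothetical file names.

```python
# Minimal illustrative sketch of a perceptual "fingerprint" (average hash).
# This is NOT Facebook's or the forum's real system; it only shows why an
# edited or re-encoded copy of a frame can yield a different fingerprint.
from PIL import Image


def average_hash(image_path, hash_size=8):
    """Shrink the image to a tiny grayscale grid and encode each pixel as
    1 (brighter than the mean) or 0 (darker), giving a 64-bit fingerprint."""
    img = Image.open(image_path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return int(bits, 2)


def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests a near-duplicate frame."""
    return bin(h1 ^ h2).count("1")


if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    original = average_hash("frame_original.jpg")
    reencoded = average_hash("frame_reencoded.jpg")
    print(hamming_distance(original, reencoded))  # low value -> likely a match
```

Matching on hash similarity rather than exact file bytes is what lets a shared database catch lightly altered re-uploads, but heavier edits can push a copy far enough away that a new fingerprint has to be added.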

"The incident highlights the importance of industry cooperation regarding the range of terrorists and violent extremists operating online," said the group, which was formed in 2017 in response to official pressure to do more to fight

(This story has not been edited by Business Standard staff and is auto-generated from a syndicated feed.)

First Published: Tue, March 19 2019. 22:45 IST