Facebook today said it will review its reporting procedures after a man in Cleveland posted a video of himself shooting and killing an elderly man on Sunday.
"It was a horrific crime — one that has no place on Facebook, and goes against our policies and everything we stand for," wrote VP of Global Operations Justin Osofsky in a blog post.
Facebook has faced intense backlash because the video of the murder was on its service for more than two hours before being taken down. Steve Stephens, the suspect who uploaded the video as well as two others related to the murder, is still at large [Update: Stephens killed himself Tuesday morning after a police pursuit].
Osofsky detailed the timeline of events, saying the suspect (he did not identify Stephens by name) posted one video announcing his intent to commit murder, another two minutes later of the shooting, and a Facebook Live video in which he confessed to the murder.
Facebook did not receive a report for the first video, Osofsky said. A report for the second video — which showed the shooting — was not received until an hour and 45 minutes after it was posted. Reports for the Facebook Live video did not come in until the five-minute broadcast had ended.
"We disabled the suspect's account within 23 minutes of receiving the first report about the murder video, and two hours after receiving a report of any kind," Osofsky said. "But we know we need to do better."
According to Facebook's timeline, the video of the murder was posted at 11:11am PDT. The first report for the video was received at 12:59pm PDT, and the suspect's account was disabled at 1:22pm PDT, more than two hours after the shooting video was uploaded.
In addition to speeding up the reporting process, Facebook plans to examine how it reviews flagged material.
Its review process currently relies on thousands of people combing over "the millions of items that are reported to us every week in more than 40 languages." Facebook wants to make this process "even faster."
Finally, the social network is looking at the technology it uses to review videos, including artificial intelligence. AI is currently used to prevent removed videos from being shared again in their entirety, while still letting users spread awareness of or speak out about a video without posting its sensitive or graphic content.
"Keeping our global community safe is an important part of our mission," Osofsky concluded. "We are grateful to everyone who reported these videos and other offensive content to us, and to those who are helping us keep Facebook safe every day."