Facebook reports spike in posts taken down for promoting hate

After tightening its standards for combating online hate and terrorism last year, Facebook has reported a sharp spike in the number of posts it automatically removed for promoting violence and hate speech across its suite of apps. 

In a blog post, the company said it removed 9.6 million posts containing hate speech during the first quarter of 2020, up from 5.7 million in the previous quarter. It also removed 4.7 million posts associated with hate organisations, up from 1.6 million in the previous quarter. 

Facebook said these posts were removed automatically following enhancements to the technology it uses to identify violating content in both images and text. It has also added warning labels to 50 million pieces of content associated with Covid-19 and banned harmful misinformation about the pandemic. 
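
Facebook has not published how this detection technology works. As a loose, hypothetical sketch, an automated pipeline might score a post's text and image separately and remove the post when the combined score crosses a threshold; the scoring functions and threshold below are placeholders, not Facebook's actual models.

```python
# Hypothetical sketch of score-based automated removal. The scorers and
# threshold are placeholders; Facebook's real classifiers are not public.

def text_score(text: str) -> float:
    """Toy text classifier: flags posts containing blocked phrases."""
    blocked_phrases = {"example slur", "example threat"}
    return 1.0 if any(p in text.lower() for p in blocked_phrases) else 0.0

def image_score(image_bytes: bytes) -> float:
    """Toy image classifier; a production system would run a vision model."""
    return 0.0  # stub: no image signal in this sketch

def should_remove(text: str, image_bytes: bytes, threshold: float = 0.9) -> bool:
    # Fuse the per-modality scores; a simple max is used here for illustration.
    return max(text_score(text), image_score(image_bytes)) >= threshold

print(should_remove("post containing an example threat", b""))  # True
```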

Last September, the company announced a series of algorithm updates to improve the way it combats terrorists, violent extremist groups and hate organisations on Facebook and Instagram. Since then, it has provided details of takedowns across its main platform as well as Instagram, Facebook Messenger and WhatsApp. 

The company made similar progress on Instagram, where the proactive detection rate rose from 57.6% to 68.9% and 175,000 pieces of content were taken down during the first quarter. It can also now distinguish content tied to one problem area from another. 

Lessons from takedowns

"For example, we have seen that violations for organised hate are more likely to involve memes while terrorist propaganda is often dispersed from a central media arm of the organisation and includes formalised branding. Identifying these patterns helps us continue to fine tune the systems for detecting organised hate and terrorist content," the blog post said. 

The latest update is the fifth Community Standards Enforcement Report, a process Facebook began in 2018 alongside more stringent rules for posting content. The reports were an outcome of the backlash the company faced over how it oversees content posted across its platforms, which also include Facebook Messenger and WhatsApp. 

The company says it is now able to detect text embedded in images and videos in order to understand the full context of a post. It also described media matching technology that finds content identical or near-identical to photos, videos, text and audio files that have already been removed. 
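
Facebook has not detailed its media matching implementation, but the general technique can be sketched with perceptual hashing, for instance via the open-source imagehash library. The file names and distance threshold below are illustrative assumptions, not Facebook's actual system.

```python
# Illustrative near-duplicate image matching via perceptual hashing.
# Requires: pip install imagehash pillow. File names are hypothetical.
import imagehash
from PIL import Image

# Perceptual hashes of media that has already been removed.
banned_hashes = {imagehash.phash(Image.open("removed_photo.jpg"))}

def matches_banned(path: str, max_distance: int = 6) -> bool:
    """True if the image is identical or near-identical to banned media.
    Subtracting two ImageHash values yields their Hamming distance."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - banned <= max_distance for banned in banned_hashes)

print(matches_banned("new_upload.jpg"))
```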

The Oversight Board

In the wake of the Christchurch (New Zealand) attacks in March last year, Facebook announced that it would create a multi-disciplinary group of safety and counterterrorism experts to develop policies and build product innovations that help define, identify and remove content that drives hate and crime. 

Facebook announced the formation of its long-awaited Oversight Board last week, with the first 20 members coming on board. The international panel includes journalists, a former prime minister, a Nobel laureate, lawyers and counterterrorism experts, who will have the final say in content moderation decisions for the world's largest social media platform. 

Facebook's wider safety and counterterrorism team now comprises 350 people with expertise ranging from law enforcement and national security to counterterrorism intelligence and academic studies in radicalisation, the company said.