Instagram won’t recommend content that vaguely violates its Community Guidelines

Image: TechRadar

Over the last two years, Facebook has experienced growing criticism concerning how it handles the spread of hate speech and misinformation. While the social media giant has made attempts at managing those issues – and even recently admitted that it's open to putting regulations in place to restrict live streaming on the platform – it is still seemingly struggling to keep on top of the problem.

As part of its ongoing efforts, Facebook has today kicked off a huge campaign to regulate the content on its suite of sites, including both Instagram and Messenger. 

Called ‘Reduce, reform, inform’, the new campaign lists the steps Facebook is taking to “manage problematic content”. This strategy is aimed at “removing content that violates [the company’s] policies, reducing the spread of problematic content that does not violate [Facebook’s] policies, and informing people with additional information so they can choose what to click, read or share”.

Insta-vague

With Instagram part of this campaign, Facebook says that the photo-sharing platform is “working to ensure that the content [recommended] to people is both safe and appropriate for the community”. 

Instagram has updated its Community Guidelines to reflect the changes, saying it will limit the exposure of posts it considers inappropriate by not recommending them in the Explore or hashtag pages. 

Unfortunately, Instagram isn’t clearly defining what it deems ‘inappropriate’. According to TechCrunch, “violent, graphic/shocking, sexually suggestive, misinformation and spam content can be deemed ‘non-recommendable’”.

So, if a post is sexually suggestive, even if it doesn’t depict nudity or a sexual act, it could be demoted in the Explore page and from the hashtag search. Instagram does clarify that such posts will be visible to an account's followers, just not to the general public.

Get by with a little help from AI

Instagram has begun training its content moderators to flag borderline content, with the company’s head of product discovery, Will Ruben, saying machine learning is already being used to determine whether posts deserve to be recommended or not.

The news has been met with mixed reactions from content creators, many of whom depend on the Explore page and hashtags – both areas where platform recommendations are key – to find new followers. Some creators are understandably concerned that the changes will diminish the reach of their posts in these areas, and will thus affect their ability to earn revenue from monetized posts.

Sharmishta Sarkar

Sharmishta is TechRadar's APAC Managing Editor and loves all things photography, something she discovered while chasing monkeys in the wilds of India (yes, she studied to be a primatologist but has since left monkey business behind). While she's happiest with a camera in her hand, she's also an avid reader and has become a passionate proponent of ereaders, having appeared on Singaporean radio to talk about the convenience of these underrated devices. When she's not testing cameras and lenses, she's discovering the joys and foibles of smart home gizmos. She also contributes to Digital Camera World and T3, and helps produce two of Future's photography print magazines in Australia.