Instagram won’t recommend content that vaguely violates its Community Guidelines


Over the last two years, Facebook has faced growing criticism over how it handles the spread of hate speech and misinformation. While the social media giant has made attempts to manage those issues – and even recently admitted that it's open to regulations restricting live streaming on the platform – it is still seemingly struggling to keep on top of the problem.

As part of its ongoing efforts, Facebook has today kicked off a major campaign to regulate content across its suite of apps and services, including Instagram and Messenger.

Called ‘Remove, reduce, inform’, the new campaign lists the steps Facebook is taking to “manage problematic content”. This strategy is aimed at “removing content that violates [the company’s] policies, reducing the spread of problematic content that does not violate [Facebook’s] policies, and informing people with additional information so they can choose what to click, read or share”.

Insta-vague

With Instagram part of this campaign, Facebook says that the photo-sharing platform is “working to ensure that the content [recommended] to people is both safe and appropriate for the community”. 

Instagram has updated its Community Guidelines to reflect the changes, saying it will limit the exposure of posts it considers inappropriate by not recommending them in the Explore or hashtag pages. 

Unfortunately, Instagram isn’t clearly defining what it deems ‘inappropriate’. According to TechCrunch, though, “violent, graphic/shocking, sexually suggestive, misinformation and spam content can be deemed ‘non-recommendable’”.

So, if a post is sexually suggestive – even if it doesn’t depict nudity or a sexual act – it could be demoted from the Explore page and hashtag search results. Instagram does clarify that such posts will still be visible to an account's followers, just not to the general public.

Get by with a little help from AI

Instagram has begun training its content moderators to flag borderline content, with the company’s head of product discovery, Will Ruben, saying machine learning is already being used to determine whether posts should be recommended or not.
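Instagram hasn't published how its system actually works, but the behaviour described above – borderline posts being excluded from recommendation surfaces like Explore and hashtag pages while remaining visible to followers – could be sketched roughly as follows. The category names, threshold and function names here are illustrative assumptions, not Instagram's real model or policy logic.

```python
# A minimal, hypothetical sketch of a recommendation gate like the one described.
# None of these names, categories or thresholds come from Instagram itself.

from dataclasses import dataclass

BORDERLINE_CATEGORIES = {"sexually_suggestive", "graphic", "misinformation", "spam"}
DEMOTION_THRESHOLD = 0.8  # assumed cut-off for an upstream classifier's confidence score


@dataclass
class Post:
    id: str
    author_id: str
    # e.g. {"sexually_suggestive": 0.91, "spam": 0.02}, produced by a content classifier
    category_scores: dict


def is_recommendable(post: Post) -> bool:
    """A post is 'non-recommendable' if any borderline category scores above the threshold."""
    return all(
        post.category_scores.get(category, 0.0) < DEMOTION_THRESHOLD
        for category in BORDERLINE_CATEGORIES
    )


def visible_on_surface(post: Post, surface: str, viewer_follows_author: bool) -> bool:
    """Only recommendation surfaces are gated; followers still see the post in their feed."""
    if surface in ("explore", "hashtag"):
        return is_recommendable(post)
    if surface == "feed":
        return viewer_follows_author
    return False


# Example: a borderline post is hidden from Explore but still shown to its followers.
post = Post(id="p1", author_id="a1", category_scores={"sexually_suggestive": 0.9})
print(visible_on_surface(post, "explore", viewer_follows_author=False))  # False
print(visible_on_surface(post, "feed", viewer_follows_author=True))      # True
```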

The news has been met with mixed reactions from content creators, many of whom depend on the Explore page and hashtags – both areas where platform recommendations are key – to find new followers. Some creators are understandably concerned that the changes will diminish the reach of their posts in these areas, and will thus affect their ability to earn revenue from monetized posts.

Sharmishta Sarkar
Managing Editor (APAC)
