Facebook is a constantly evolving platform, with everything from personal accounts and company pages to fan pages and profiles for pets being created and updated every day.
But for all that, Facebook has been plagued by issues relating to hate speech and crude, violent, graphic and sexist content. It has received many letters and complaints from individuals and organisations, such as Women, Action and The Media.
In response to all this offending content and the complaints it has drawn, the social media giant has made numerous attempts to update its policies and terms and conditions of use. It has even hired people to look over content, flagging and deleting anything deemed questionable.
But it all came to a head when advertisers began pulling ads from Facebook earlier this year because of complaints from users who found company ads next to pages that had abusive, graphic or controversial content.
It's not what it looks like…
Starting this week, Facebook will: "… implement a new review process for determining which Pages and Groups should feature ads alongside their content."
"Prior to this change, a Page selling adult products was eligible to have ads appear on its right-hand side; now there will not be ads displayed next to this type of content," the company said in its announcement.
While pages will at first be monitored by humans, we can't imagine it's a job many will be jumping at – well, most won't.
The company has put together a filter of sorts that will be implemented in the coming weeks, looking out for controversial and questionable pages or groups.
Facebook has said that it "… will build a more scalable, automated way to prevent and/or remove ads appearing next to controversial content."
This kind of technology isn't new to the company, which already uses filters for spam, etc., but this new automated system/filter/robot will be on the lookout specifically for pages and groups that show explicit, abusive and offensive content.
It will be targeting "any violent, graphic or sexual content (content that does not violate our community standards)."
Using an algorithm to identify pornographic or graphic photographs is nothing new, but Facebook's new automated system won't be foolproof.
Robots for porn and violence?
Last year, there was some controversy over the wrongful removal of pictures of women breastfeeding, although Facebook said that these images were removed only after users had reported them.
This is the kind of judgement call that Facebook's automated system will have to make. It will need to decide what counts as "violent, graphic or sexual content" based on context, and in many instances it may wrongfully conclude that a page or group is "controversial".
It should also be noted that this review process only affects pages whose content complies with Facebook's community standards – that is, "content that does not violate our community standards". Anyone who sees content on Facebook that they find offensive should still report it to Facebook directly.