In Facebook’s ongoing fight with fake news, it employs a number of methods to help weed out falsehoods from the truth, but now we know the company’s method for verifying the verifiers.
An article from the Washington Post has revealed that Facebook assigns its users a reputation score between 0 and 1. When a user flags a post as containing false news, their score is adjusted based on the accuracy of that tip.
Users who consistently flag posts that genuinely contain fake news will see their score rise, and Facebook will lend their future tips more weight, whereas the opposite is true of those who repeatedly report legitimate news.
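Facebook has not disclosed how the score is actually calculated, but the mechanism described above can be sketched in a few lines. The update rule, the 0.5 starting point, and the function names below are all assumptions for illustration, not Facebook's real formula:

```python
# Hypothetical sketch only: Facebook's actual scoring formula is not public.

def update_score(score: float, flag_was_accurate: bool, rate: float = 0.1) -> float:
    """Nudge a 0-to-1 reputation score toward 1 after an accurate flag,
    toward 0 after an inaccurate one."""
    target = 1.0 if flag_was_accurate else 0.0
    return score + rate * (target - score)  # result always stays within [0, 1]

def weighted_flag_total(flags: list[tuple[float, int]]) -> float:
    """Sum a post's flags, weighting each reporter's count by their score,
    so trusted users' reports carry more weight than low-scoring ones."""
    return sum(score * count for score, count in flags)

# A new user's starting score is unknown; 0.5 is an assumed neutral default.
score = 0.5
for accurate in [True, True, False, True]:
    score = update_score(score, accurate)
```

Under a scheme like this, a brigade of zero-reputation accounts contributes almost nothing to a post's weighted flag total, which is exactly the abuse the score is meant to blunt.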
Who watches the watchmen?
The score has been introduced to combat the growing problem known as ‘brigading’, in which organized groups of trolls or political activists deliberately flag posts in the hope of suppressing or censoring them.
Although Facebook now uses third-party fact-checkers to review each of these flags, a post that is flagged persistently and in high volume can still have its visibility reduced, regardless of its legitimacy.
Speaking with the Washington Post, Facebook’s product manager for fighting fake news, Tessa Lyons, stressed that this score is just one of the methods the company uses to check the reliability of fake news reports, and that it is not the sole indicator of a user’s trustworthiness.
At present, this is strictly an internal measure and there is no way for users to check their reputation score, a precaution likely put in place to avoid any further gaming of Facebook’s systems to drive personal or political agendas.