Facebook has developed a trustworthiness rating for individual users to help it clamp down on fake news.

It's part of an effort to make the process of reporting fake news less scattershot and less open to abuse. The Washington Post has reported that users are rated on a scale from zero to one, so that "malicious" users can be identified. It is not clear how many users have a score, nor how the score is calculated.

Facebook's Tessa Lyons, the product manager in charge of fighting misinformation, told the Washington Post that it is "not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher". So, when a post is reported, Facebook weighs this score alongside many other signals to judge the purpose and intent behind the report.
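Facebook has not disclosed how the score is computed or how it is combined with other signals. As a purely hypothetical sketch of the general idea described above, a reporter's trust score might be blended with other invented signals, such as how many users flagged the same post, to decide how quickly a report gets reviewed. All names and weights here are illustrative, not Facebook's:

```python
# Hypothetical illustration only: the real system's inputs and weighting
# are not public. This shows the general idea of weighing a reporter's
# trust score (0.0-1.0) alongside other signals when triaging a report.

from dataclasses import dataclass


@dataclass
class Report:
    reporter_trust: float      # 0.0 (least trusted) to 1.0 (most trusted)
    reports_on_post: int       # how many users flagged the same post
    past_flags_upheld: float   # fraction of this user's past flags that held up


def triage_priority(report: Report) -> float:
    """Blend signals into a single review priority (higher = review sooner).

    The weights are invented for illustration.
    """
    # Cap the volume signal so a pile-on cannot dominate the score.
    volume_signal = min(report.reports_on_post / 10, 1.0)
    return (0.5 * report.reporter_trust
            + 0.3 * report.past_flags_upheld
            + 0.2 * volume_signal)


if __name__ == "__main__":
    credible = Report(reporter_trust=0.9, reports_on_post=25, past_flags_upheld=0.8)
    suspect = Report(reporter_trust=0.1, reports_on_post=2, past_flags_upheld=0.05)
    print(f"credible reporter: {triage_priority(credible):.2f}")
    print(f"suspect reporter:  {triage_priority(suspect):.2f}")
```

Under a scheme like this, a flag from a user whose past reports were consistently upheld would be prioritised, while flags from users who indiscriminately report stories they merely disagree with would carry little weight.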

However, Facebook has pushed back against the idea that it is rating its users.

"The idea that we have a centralised 'reputation' score for people that use Facebook is just plain wrong and the headline in the Washington Post is misleading," the BBC reports a spokesperson as saying.

"What we're actually doing: we developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system.

"The reason we do this is to make sure that our fight against misinformation is as effective as possible."