Tech companies have long faced criticism over their lack of corporate social responsibility in failing to act on the fake news and hate speech hosted on their sites. In an attempt to be transparent, this time last year Facebook released information about how they were trying to tackle hate speech.

In a post titled 'Who Should Decide What Is Hate Speech in an Online Global Community?', Richard Allan, Facebook's VP of EMEA Public Policy, revealed that the company removes an average of 66,000 posts reported as hate speech a week.

Though that might sound like a respectable figure in response to a Sisyphean task, questions continue to be asked of the social media site about the content that doesn't violate their hate speech policy.

Last night Channel 4 aired an undercover investigation into the dubious ethics of Facebook's content moderation. The programme, Inside Facebook: Secrets of the Social Network, delved into what it takes to remove content and why seemingly blatant racist or violent material remains on the site.

Video footage for the programme was obtained by a reporter who went undercover as a Facebook moderator at CPL Resources - a content moderation contractor based in Dublin that has worked with Facebook since 2010.

One video secretly recorded inside a training session on hate speech showed a racist meme which read 'When your daughters [sic] first crush is a little negro boy' alongside an illustration of a mother drowning her white daughter in a bath. The image, which has been taken down since the documentary aired, was allowed to circulate on the site "for a while".

"This is an ignore because it implies a lot but to reach the actual violation you have to jump through a lot of hoops to get there," the content moderation trainer in the video explains. "There's no attack actually on the negro boy, it's implied."


In another distressing moment it was revealed that a video of a man beating a young boy was "marked as disturbing" but left on the site for years.

In the Channel 4 footage, the employee giving the training session is asked what procedures are in place after marking the child abuse content as disturbing. He explains that action is only taken when "it meets our escalation criteria", adding that until then, "as far as we're concerned it's just junk floating around the internet".

Channel 4 says that their investigation found that unless video content was streamed live to Facebook, "it does not usually report videos of physical child abuse to the police".

Viewers were outraged to discover that Facebook remove all content in which nipples are exposed, even in images of breastfeeding, but are willing to ignore instances of violence against children.

In discussing an inflammatory post which read "Muslim immigrants should f*** off back to (their) own country", the moderator recommended ignoring the content because "they're still Muslims but they're immigrants, so that makes them less protected."

The investigation also found that the Facebook page of jailed right-wing campaigner Tommy Robinson was given special allowances, meaning 'frontline moderators' were not allowed to remove material from the page even if it violated Facebook's policies.

Facebook's willingness to protect pages with huge numbers of followers, regardless of the content they post, was summed up by a moderator who said of the now-deleted Britain First page: "Obviously they have a lot of followers so they're generating a lot of revenue for Facebook."

Facebook's reluctance to act might seem shocking, but the company has long tried to avoid wading into judgements about the ethics of the content hosted on their platform. These decisions are at the heart of the struggle going on at the likes of Facebook, YouTube and Twitter as they insist they are technology companies in order to avoid the regulation that comes with being a media company.

In Zuckerberg's testimony to Congress earlier this year he was asked what Facebook was doing to moderate content on the site. There he peddled the same answer given previously: that the company was increasing the number of human moderators and investing in artificial intelligence to aid the process.

All of which might suggest a commitment to saving the site from drowning in videos of children being abused and photoshopped images of Britain overrun by Muslims. However, as long as Facebook's policy on hate speech remains so inconsistent and apathetically enforced, it's hard to imagine much will change.

Revealing the real reason Facebook remain resistant to tackling hate speech, a moderator told the hidden camera: "If you start censoring too much then people stop using the platform. It's all about money at the end of the day."