Shortly after the November 2015 terrorist attacks in France, in which 130 people were killed, 90 of them at the Bataclan concert hall in Paris, Donald Trump decided he'd seen enough. In a post on Facebook, which he repeated elsewhere, Trump called for a "total and complete shutdown of Muslims entering the United States until our country's representatives can figure out what is going on." Setting aside your opinion on the application of such a ban in the real world, ask yourself this question in more abstract terms: does that sort of thing amount to hate speech?

It doesn't, at least according to Facebook's censors, who left the post up. But as internal Facebook documents uncovered by ProPublica show, the standards by which the social media behemoth makes such decisions are often capricious and selectively applied. Trump's status as a powerful politician may have played a role as well. The result, the report suggests, is that free speech for some—particularly people like Trump—may not be the same free speech that you and I enjoy on the platform. In fact, Facebook's speech rules appear to be not only more forgiving when certain classes of people are speaking, but also more restrictive when speech is directed back at them. On Facebook, much like everywhere else in history, it's helpful to be a white man.

The new report, like another last month from The Guardian—which attempted to parse what constituted serious and credible threats of violence against certain groups—provides insight into the difficulty of policing the vast network. When asked for comment, Facebook directed Esquire to a statement from Facebook Vice President Richard Allan. In it, he said the company removes an average of 66,000 posts reported as hate speech each week, an undoubtedly herculean task, even for a company that large. But the statement also plays up the almost comically absurd flailing involved in systematising nebulous philosophical questions into an actionable set of company standards. In the earlier report, Facebook moderators were advised that statements like "To snap a bitch's neck, make sure to apply all your pressure to the middle of her throat" were kosher, while something like "Someone shoot Trump" was not; the former cast too broad a net to read as a direct call to action, while the latter was specific enough to cause concern.


Confounding matters, in this most recent set of documents, Facebook seems to veer in the exact opposite direction. Hate speech that is broadly directed at certain groups—white men, say—is not allowed. To get around the censors, one must drill down a little further, qualifying one's hate with another layer of specificity.

One slide illustrates this point. In a quiz meant to train employees on which "subsets" of people are protected from speech directed against them and which are not, Facebook provided three examples: female drivers, Black children, and white men. Which of them do you suppose it's not OK to hurl invective against?

The answer would be bleakly hilarious if it also wasn't so frustrating. There is, however, a logic to it, at least a sort of insane bureaucratic logic.


In a post in May, Facebook defined what it considers hate speech:

"While there is no universally accepted definition of hate speech, as a platform we define the term to mean direct and serious attacks on any protected category of people based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or disease."


As the internal slides show, any time one of those "protected categories" is combined with an "attack," the result is hate speech.


But not every post is so clear-cut. Sometimes, as in the case of Trump's call for banning Muslim immigrants, the identity algebra at work—in the warped thinking here—boomerangs the speech back into acceptable territory.


A protected class + a protected class = a protected class. "White" + "man" = protected.

A protected class + a non-protected class, however, results in a group of people it's OK to target: "Irish teens" or "teen migrants," in the examples the documents give, because age is not a protected category.

Migrants are a "quasi protected class" by Facebook's reckoning, but calls for the exclusion or segregation of such a group are ignored by its censors.
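To make the bureaucratic logic concrete, the rubric described in the slides can be sketched as a toy rule. This is purely an illustration of the reported rule as described above, with hypothetical names and category labels; it is not Facebook's actual code or category list.

```python
# Illustrative sketch of the reported rubric: an attacked group counts as
# "protected" only if EVERY qualifier describing it is a protected category.
# Category names here are hypothetical labels for the article's examples.

PROTECTED = {
    "race", "ethnicity", "national origin", "religion", "sex",
    "gender", "sexual orientation", "disability", "disease",
}

def group_is_protected(qualifier_categories):
    """Protected + protected -> protected; add one unprotected
    qualifier (age, occupation...) and the group becomes fair game."""
    return all(cat in PROTECTED for cat in qualifier_categories)

# "White men": race + sex, both protected -> attacks are hate speech.
print(group_is_protected({"race", "sex"}))             # True
# "Irish teens": national origin + age -> not protected under the rubric.
print(group_is_protected({"national origin", "age"}))  # False
# "Female drivers": sex + occupation -> also fair game.
print(group_is_protected({"sex", "occupation"}))       # False
```

The perverse consequence follows directly from the `all()` check: each extra qualifier is one more chance to fall outside the protected set, so more specific hate is more likely to be allowed.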


The result is a confounding labyrinth of identity specifics—a sort of cross-pollination of people. The more qualifiers you apply to a group, the more acceptable it becomes to target them. So Muslims, broadly speaking, cannot be attacked, but a post like the one Republican Congressman Clay Higgins put on Facebook earlier this month, calling for the hunting and killing of all "radicalised Islamic" "heathen animals," passed muster, because he refined his hate speech down to a pinpointed target.

Got all that?

Allan attempted to explain further in his post. Sometimes, he wrote, "there isn't a clear consensus — because the words themselves are ambiguous, the intent behind them is unknown or the context around them is unclear. Language also continues to evolve, and a word that was not a slur yesterday may become one today."

Complicating matters are the varying standards and laws in each specific country in which Facebook operates. Naturally, it's in the company's best interest to stay on the good side of the governing parties in each one, which leads to another matter entirely: How does a global company define who is a protected class, who is not, who is a freedom fighter, and who is a terrorist, when there is no agreement among the countries in question?

It's a well-known dilemma in the realm of computing: for a computer to approach true artificial intelligence, it would have to be capable of recognising abstract thought. We're not quite there yet, but Facebook seems to be making the exact opposite mistake. With its team of censors, it is instructing humans to think like a computer, assigning 1s and 0s to various classes of people and trusting them to do the math.


"Facebook's mission has always been to make the world more open and connected," the company said in a statement a few weeks ago. "We seek to provide a platform where people can share and surface content, messages and ideas freely, while still respecting the rights of others. When people can engage in meaningful conversations and exchanges with their friends, family and communities online, amazingly positive things can happen."

Whether or not you trust that Facebook is at least attempting to do the right thing in a self-admittedly difficult situation will vary. But if history has taught us anything, it's that the standards for what is permissible are always different for the powerful and the powerless. The louder your voice and the further its reach—a virulently racist world leader, to use one completely imaginary example—the more preference you will be given over the cry of the marginalised. If Mark Zuckerberg really intends his platform to empower the rest of us, and not just the Donald Trumps of the world, he'd do well to correct the imbalance.

From: Esquire US