Facebook Adds Blackface to List of Prohibited Content It Might Get Around to Enforcing

Photo: Daniel Leal-Olivas (Getty Images)

A month after Facebook reached a breaking point with civil rights leaders over its failures to police its platform for racist and hateful content, the social media behemoth has announced some changes to its hate speech policies. Going forward, Facebook will prohibit certain depictions of blackface and content suggesting that Jewish people run the world.

Facebook announced the new policies on Tuesday and, as with all things related to FB and moderation, it’s important to look at the exact wording that has been added to the granular list of banned items. The company says that it will now prohibit “Caricatures of black people in the form of blackface” and generalizations about “Jewish people running the world or controlling major institutions such as media networks, the economy or the government.”

Anyone freaking out that their Tropic Thunder memes are going to be taken away should keep in mind that Facebook always leaves plenty of wiggle room in its hate speech rules. For instance, moderators will need to determine that a post is a “direct attack on people based on what we call protected characteristics.” The company has defined 11 protected characteristics, and an attack is described as “violent or dehumanizing speech, harmful stereotypes, statements of inferiority, or calls for exclusion or segregation.” There are further contextual caveats in which Facebook reserves the right to make a judgment call, such as when a user shares “content containing someone else’s hate speech for the purpose of raising awareness or educating others.”

Pointing out that Facebook is leaving room for nuance isn’t to say the company is trying to pull the wool over people’s eyes, but it should give you some indication of how strictly these new categories of hate speech will actually be policed in the long run. The social network has failed time and again to enforce its own policies in high-profile cases with global social and political implications. In 2017, ProPublica reviewed 49 posts that appeared to run afoul of Facebook’s guidelines, and the company acknowledged that its moderators had made the wrong call in 22 of those cases.

In a blog post on Tuesday, Facebook said that it has had further problems with moderation since the covid-19 pandemic disrupted workplaces around the globe. It wrote in the statement:

With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram. Despite these decreases, we prioritized and took action on the most harmful content within these categories. Our focus remains on finding and removing this content while increasing reviewer capacity as quickly and as safely as possible.

It’s true that Facebook’s 2.7 billion users produce far too much content for the company to realistically enforce its policies across the board, but that has always been more of an argument that Facebook is simply too big to run its business responsibly. And the claim that it just doesn’t have enough moderators falls apart at a time of record unemployment and $5.1 billion in quarterly profits.

The upside of Facebook putting a spotlight on Tuesday’s policy change is that racists will inevitably poke their heads up and start testing the system. The question is, will Facebook be watching?
