GUEST WORDS--There is a Facebook group dedicated to promoting the infamous blood libel—the story that Jews, now or in the past, have ritually murdered Christian children. It's a myth that has been used to justify violent persecution against Jews for centuries.
Since Facebook bans hate speech based on religious affiliation, every couple of months I like to report the blood libel page for violating Facebook's community standards. In fact, I and many others have been reporting it for years, as the group slowly grew from around 500 followers to over 1,200. Every time, within hours or days, Facebook responds: The page can stay.
Our new masters of media are mostly failing the challenge of hate in the 21st century. Rather than making the world more connected and elevating the voices of the oppressed, their platforms have become vehicles by which fringe groups propagate their offensive creeds, unchecked by algorithm or even by common sense. Facebook, Google, Twitter, and the other companies that hold the Internet together have all failed to take or sustain basic steps to ensure that people dedicated to spreading hate don't get a foothold on these powerful platforms. Instead, hate seeps through the net, encouraging like-minded miscreants and intensifying their commitment to bigotry and violence while making the Web toxic for marginalized citizens.
There are two different kinds of problems here. One, social media platforms and Internet companies fail to respond consistently to the most explicit kinds of hate. Two, even when taking mild steps against the worst perpetrators, these companies tend to give a pass to anyone who cloaks their bigotry in mild euphemisms. Here are some examples of the phenomenon.
I recently spent days responding to Twitter accounts proclaiming themselves "hardcore kike killers" (a neo-Nazi reference). Some of these accounts were eventually taken down, but only after Twitter Safety initially said they weren't in violation of the rules (and after lots of harassment appeared in my Twitter feed). Other accounts remain, including explicitly Nazi-affiliated accounts whose feeds contain little but racist and anti-Semitic invective. On top of this, Twitter tends to act powerless whenever anti-Semites lightly code or soften their bigotry with phrases such as "hook-nosed liar" or "globalist." But anyone with common sense can identify the anti-Semitism, even when the word "kike" is missing.
Web-hosting platforms have been similarly complicit. After the murder of Heather Heyer in Charlottesville, the neo-Nazi website Stormfront lost its Web host for a few weeks and went offline. Once the furor died down, however, the Web-hosting company Tucows put the site back online. Don Black, the founder of Stormfront, runs martinlutherking.org, a site that defames King in myriad ways. It has the educational trappings of a cheerful, student-friendly site, but its contents are focused on sexual slander. As media scholar Jessie Daniels points out, during the hubbub over the white supremacists in Charlottesville, the effort to hold Black responsible for sparking racist violence never included his racist website about King.
It was almost exactly a year ago that Googling "did the Holocaust happen" yielded search results that resoundingly answered, "no." Widespread outrage prompted Google to adjust those search settings. When you search for Martin Luther King's name, on the other hand, Black's hate site about MLK still appears as the seventh result—well on the front page.
And then there's blood libel on Facebook. I'm Jewish and a medieval historian. The blood libel is a medieval myth that still permeates anti-Jewishness today, so I take the presence of this site very personally. Moreover, it has endured on Facebook despite widespread pressure on social media, including from well-connected groups like the Anti-Defamation League. It should be obvious that this Facebook group violates any useful community standards, if, indeed, the goal of such standards is to foster community. It's run by someone with the same alias as a well-known white supremacist, and it links to 19th-century newspapers accusing Jews of blood libel, a 1980s Oprah interview in which the guest accused Jews of ritual murder, YouTube documentaries on the subject, and (just a few weeks ago) new claims by Russian clerics connected to Vladimir Putin that Jews ritually murdered Tsar Nicholas II and his family in 1918. (The Romanovs were actually murdered by their Bolshevik captors.) On Facebook, after this latest false claim, the site administrator writes, "The truth is coming out...."
So how does such a reprehensible page still have a home on Facebook? The secret seems to be a vague nod toward the impossibility of disproving the blood libel. The site owner introduces his page by writing, "For most of history, belief in Jewish ritual murder was acceptable and widely accepted. ... But since WW2, with the rise of Jewish ownership of the mass media, has come the politically correct 'Doctrine of the Never-Guilty Jews.' Every accusation of Jewish ritual murder, no matter how well proved it might have been in its time, has become a 'blood libel' in today's media, a phrase that explicitly frames each case as a malicious falsehood, without an examination of the facts. Probably, not every accusation is true. But it is also unlikely that all of them are false."
I've queried Facebook spokespeople many times about the site and always receive the official response that, since the blood libel page doesn't violate community standards, they are powerless to act. A Facebook employee, speaking to me without permission from the company, told me that there's frustration in the ranks over it, but that the site is successfully gaming the community standards by inserting just enough doubt into its claims—and by keeping its claims historical, rather than contemporary. Accusing current Jews of murdering babies would get you banned, but deploying a conspiratorial history is just fine.
The blood libel page is just one of the many awful groups that have taken advantage of these new media platforms to connect with each other and spread hate. It's true that there are lots of gray areas when it comes to speech. Sometimes, though, there's a bright line between speech that is merely controversial, and hate speech that has been used for centuries to promote violence against marginalized people.
Facebook, Twitter, Google, and the like can't stop people from being evil; what they can do is build a system that can't be so easily gamed by blood libelers, Holocaust deniers, and people dedicated to propagating hate against vulnerable people of all sorts. When someone says they're a would-be "kike killer," believe them. Then get them off your network.
(Guest columnist David M. Perry is a former professor of history, contributing writer at Pacific Standard, and freelance journalist focused on disability, parenting, history, and education. This perspective was posted first at Pacific Standard.)