The Dilemma of Anti-Semitic Speech Online

GUEST WORDS--Robert Bowers, the alleged Pittsburgh synagogue killer, had an online life like those of many thousands of anti-Semitic Americans.

Among his social media accounts, he was most active on Gab, a right-wing Twitter knockoff with a hands-off approach to policing speech. The Times of Israel reported that among anti-Semitic conspiracy theories and slurs, Bowers had recently posted a picture of “a fiery oven like those used in Nazi concentration camps used to cremate Jews, writing the caption ‘Make Ovens 1488F Again,’” a white-supremacist reference.

Then he made one last post, saying, “I’m going in,” and allegedly went to kill 11 people at the Tree of Life synagogue in Pittsburgh.

Only then did his accounts come down, just like those of Cesar Sayoc, the mail-bomb suspect. This is how it goes now.

Both of these guys made nasty, violent, prejudiced posts. Yet, as reporter after reporter has noted, their online lives were — to the human eye at least — indistinguishable from the legions of other trolls who say despicable things. There is just no telling who will stay in the comments section and who will try to kill people in the real world.

In some corners of the internet, the tired hypothetical of free speech has been turned on its head: There isn’t one person yelling “fire” in a crowded theater, but a theater full of people yelling “Burn it down.” The pose of the alt-right is that its members are only kidding about hating black people or Jews. A Bowers can easily hide among all the people just “shitposting” or trying to “trigger the libs.”

All of which complicates the situation for the big internet companies. Over the past 10 years, free speech has undergone a radical change in practice. Now nearly all significant speech runs through a corporate platform, be it a large hosting provider, WordPress, Facebook, or Twitter. Speech may be free by law, but attention is part of an economy. Every heinous crime linked to an app or website tests the fragile new understanding that tech companies have of their relationship to speech.

Tech-company employees like to say things like “Do you want Mark Zuckerberg being the arbiter of what speech is allowed on Facebook?” as if that were not already the case, and not exactly what Facebook signed up for when it attempted to “rewire the way people spread and consume information,” as Zuckerberg put it in his 2012 letter to shareholders.

During the past couple of years, big platforms like Facebook have come to understand that violent rhetoric is a danger to their business. Zuckerberg has vowed to “take down threats of physical harm,” specifically those related to white-supremacist violence.

In some areas, such as terrorist propaganda, the companies take automatic, proactive steps to keep those ideas from reaching an audience. But by and large, they’ve developed rules for judging content based on what users report to them.

On paper, these rules tend to look pretty good. They are written by smart people who routinely encounter the problem of regulating billions of people’s speech, and who have thought hard about it.

But seemingly any time someone has reason to look closely at the posts of individual users who turn violent, it is plain that all kinds of violent posts make it through the systems that Facebook, Twitter, and others have set up. Report anti-Semitism or rank racism or death threats or rape GIFs sent to women, and disturbingly often, things that appear to be clear violations of a company’s policies will not be seen that way by content moderators. Mistakes are made (no one knows how many), and it’s easy to blame the operations of these companies.

But the problem goes deeper. Internet trolls have developed a politics native to these platforms that uses their fake democratic principles against them. “These fringe groups saw an opportunity in the gap between the platforms’ strained public dedication to discourse stewardship and their actual existence as profit-driven entities, free to do as they please,” John Herrman wrote in 2017.

Right-wing critics of the platforms find themselves hamstrung because, generally speaking, they don’t want government mandates imposed on private companies. As The Daily Caller put it, “Of course, Twitter is a private company, free to do as it pleases, even if it serves to please some and displease others.” There is no public platform; every place where speech circulates depends on private companies. So cries of censorship are toothless unless people leave the platform (which they don’t in significant numbers) or advertisers pull their money (which they won’t over the desires of neo-Nazis or anti-Semites).

The main pressure point, then, is to get internet companies to conform to an absolutist free-speech position, which many of these critics claim is more in line with the American conception of the principle. This also aligns the trolls with the side of democracy over the autocratic platforms. They become the heroes fighting oppression.

For many years, this politics worked. It was not long ago that free-speech absolutism was the order of the day in Silicon Valley. In 2012, Twitter’s general manager in the U.K. described the company as “the free-speech wing of the free-speech party.”

But that was before anti-Semitic attacks spiked; before the Charlottesville, Virginia, killing; before the kind of open racism that had lost purchase in American culture made its ugly resurgence.

Each new incident ratchets up the pressure on technology companies to rid themselves of their trolls. But the culture they’ve created will not prove easy to stamp out.

(Alexis C. Madrigal is a writer at @TheAtlantic. Host of the Containers podcast on global trade and technology. Author of Powering the Dream. @WIRED and @Fusion alum. Married to @sarahrich. This piece was posted most recently at Medium.com.)

-cw