Is It Possible To Have Coherent Principles Around Free Speech Norms?
I.
One factor that must underlie people’s distrust of non-governmental free speech norms is that they’re so underspecified. The First Amendment is a comparatively simple, bright-line concept – the police can’t arrest you for saying the President sucks. Sure, we need a zillion Supreme Court cases to iron out the details, but it makes sense in principle. By contrast, social norms about free speech risk collapsing into the incoherent Doctrine Of The Preferred First Speaker, where it’s okay for me to say that the President sucks, but not okay for you to say that I suck for saying that. This is dumb, and I don’t know if free speech supporters have articulated a meaningful alternative. I want to sketch out some possibilities for what that sort of alternative would look like.
The philosophical question here is separating out the acts of having an opinion, signaling a propensity, and committing a speech act.
Having an opinion is the sort of thing free speech norms ought to exist to protect. The opinion ought to enter the marketplace of ideas, compete with other opinions on its merits, and either win or lose based on people’s considered rational judgment.
But this can’t be separated from signaling a propensity for action. Suppose Alice has the opinion “hand hygiene doesn’t matter”. The truly virtuous action is to show her (and concerned third parties) studies that prove that dangerous infections are transmissible by unwashed hands. But while you’re doing that, it’s fair to not want to eat at her restaurant. And it’s pro-social to tell other people not to eat at her restaurant either, and not to hire her as a nurse – and if she’s already a nurse, maybe to get her fired. Even though reasonable free speech norms demand that we fight bad ideas through counterargument rather than social punishment, in this case they should permit a campaign to get Alice fired.
One solution here might be to give people the burden of demonstrating that their controversial opinions won’t lead to dangerous actions. For example, if Alice is a nurse, she might say “I don’t believe hand hygiene matters, and I’m going to try to convince the hospital administration to remove their rule mandating handwashing – but until I succeed, I’ll follow the rules and wash my hands just like everyone else.” If I trusted Alice, this would allay my concerns, and I would go back to wanting to debate with her instead of wanting her fired. See also Be Nice, At Least Until You Can Coordinate Meanness.
Some signaling of propensities can’t be so easily fixed. If Carol thinks that “Hitler should have finished the job”, I feel like this tells me a lot about Carol besides just her moral ranking of various World-War-II-related alternate histories. If she were a schoolteacher, then even if she promised not to kill any Jews in class, or even to spread any anti-Semitic propaganda in class, it would be hard for me not to wonder what else was wrong with her, and whether she could really disguise every single repugnant aspect of her personality. On the other hand, if we try to get the school board to fire her, we’re implicitly endorsing the principle “Get someone fired if you know of a belief of theirs that suggests they’re an otherwise repugnant person” – and isn’t this the same principle that led people to campaign against atheist schoolteachers, pro-gay schoolteachers, communist schoolteachers, etc? See also Not Just A Mere Political Issue. I think I bite the bullet on this one and say that if the schoolteacher credibly promises not to be repugnant in any way in front of the kids, you let her keep teaching until she slips up.
And both having opinions and signaling propensities are hard to separate from committing speech acts. The archetypal example is telling an assassin “I’ll give you $10,000 if you kill Bob” – a form of speech, but tantamount to murder. Repeated harassment – the kind where you scream insults at someone every time they leave the house – falls in the same category: the active ingredient isn’t the information conveyed by which insults you choose, it’s that the target faces being screamed at and made to live in fear. And yes, the archetypal example of this is starting a campaign to email someone’s embarrassing secrets to their boss to get them fired.
We can’t just ban speech acts. Everything is a speech act. Me saying “Donald Trump is wrong on immigration” lowers Donald Trump’s status – that’s a speech act. Me saying “You’re wrong about free speech” might trigger you and make you feel awful until you write a 10,000 word essay responding to me – that’s a speech act too. Telling an offensive joke is definitely a speech act, but do we want to ban all jokes that anyone anywhere might be offended by? Let’s face it: a lot of speech is criticism, sometimes really harsh criticism, and the line between “criticism”, “insult”, and “harassment” is vague and debatable (see eg all of Twitter). Everyone has a different set of speech acts they consider beyond the pale, with no real way of sorting it out. So what speech acts do we permit as unavoidable parts of the process of social interaction, which ones do we punish, and how do we punish them?
II.
A sample problem: a while ago, I read an article which took a sensitive social problem, approached it with inexcusably terrible statistics that mis-estimated its incidence by seven orders of magnitude, and then used it to mock the people who suffered from it and tell them they were stupid. I complained about this, and the author got really confrontational with me, saying things like I “needed to see a psychiatrist”. I ended up writing a couple of really angry blog posts, which not only corrected the statistics but also prominently named the author, accused him of being a bad person, and recommended that nobody ever trust him or take him seriously again.
One view: although the author was wrong, we’re all wrong sometimes. I’ve been wrong before, probably in ways that other people considered inexcusable, and I would rather be politely corrected than excoriated in public, dragged through the mud, and accused of being a defective human being. By writing those posts, I contributed to a world where we don’t just debate each other’s points, but launch personal attacks against people in the hopes that they are so ashamed and miserable that they never participate in the discussion again. I have committed crimes against Reason, and I should humbly apologize and try to do better next time.
A second view: the author was either deliberately deceitful or criminally stupid; either way he really was inexcusably bad. If I just quietly correct his statistical error, only a fraction of his readership will see my correction, and meanwhile he’ll go on to do it a second time, a third time, and so on forever. Although there are many good people who should be approached as equals in the marketplace of ideas, there are also defectors against that marketplace who deserve to be ruthlessly crushed, and I was doing a public service by crushing one of them.
And a third view: by being needlessly cruel in his article, the author had already forfeited the protection of “the marketplace of ideas”. Just as you can shoot back at someone who tries to shoot you without worrying so much about the moral principle of nonviolence, it’s always proper to fight fire with fire. Although I wouldn’t be justified in smacking down someone who had merely failed egregiously, someone who fails egregiously while breaking good discussion norms is another matter.
The second and third views get kind of scary when universalized. The second amounts to “if you decide someone’s a really bad person, feel free to crush them.” No doubt some evangelicals honestly think that gay rights crusaders are bad people; does this justify personal attacks against them?
The third seems to demand a more specific trigger (violation of a norm), but since nobody agrees where the norms are, it’s more likely to just lead to cascades where everyone ends up at different levels of the punishing/meta-punishing/meta-meta-punishing ladder and everyone thinks everyone else started it.
(an example: Alice writes a blog post excoriating Bob’s opinion on tax reform, calling him a “total idiot” who “should be laughed out of the room”. Bob feels so offended that he tries to turn everyone against Alice, pointing out every bad thing she’s ever done to anyone who will listen. Carol considers this a “sexist harassment campaign” and sends a dossier of all of Bob’s messages to his boss, trying to get him fired. Dan decides this proves Carol is anti-free speech, and tells the listeners of his radio show to “give Carol a piece of their mind”, leading to her getting hundreds of harassing and threatening emails. Eric snitches on Dan to the police. How many of these people are in the wrong?)
But I can’t fully bite the bullet and accept the first view either; some people are so odious that an alarm needs to be spread. I’m not proud of my behavior in the specific situation mentioned, but I won’t completely give up the right to do something similar if the situation arises again. I’m going to try as hard as I can to err on the side of not doing that (I stick by my decision not to name the Reason columnist involved in the sandwich incident, although I guess everyone already knows) but sometimes the line will need to be crossed.
III.
I think the most important consideration is that it be crossed in a way that doesn’t create a giant negative-sum war-of-all-against-all. That is, Democrats try to get Republicans fired for the crime of supporting Republicanism, Republicans try to get Democrats fired for the crime of supporting Democratism, and the end result is a lot of people getting fired but the overall Republican/Democrat balance staying unchanged.
That suggests a heuristic very much like Be Nice, At Least Until You Can Coordinate Meanness again: don’t try to destroy people in order to enforce social norms that only exist in your head. If people violate a real social norm, one that the majority of the community agrees upon and that they should have known about – that’s one thing. If you idiosyncratically believe something is wrong, or you’re part of a subculture that believes something is wrong even though there are opposite subcultures that don’t agree – then trying to enforce your idiosyncratic rules by punishing anyone who breaks them is a bad idea.
And one corollary of this is that it shouldn’t be arbitrary. Ten million people tell sexist jokes every day. If you pick one of them, apply maximum punishment to him, and let the other 9.99 million off scot-free, he’s going to think it’s unfair – and he’ll be right. This is directly linked to the fact that there isn’t actually that much of a social norm against telling sexist jokes. My guess is that almost everyone who posts child pornography on Twitter gets in trouble for it, and that’s because there really is a strong anti-child pornography norm.
(this is also how I feel about the war on drugs. One in a thousand marijuana users gets arrested, partly because there isn’t enough political will to punish all marijuana users, partly because nobody really thinks marijuana use is that wrong. But this ends up being unfair to the arrested marijuana user, not just because he’s in jail for the same thing a thousand other people did without consequence, but because he probably wouldn’t have done it if he’d really expected to be punished, and society was giving him every reason to think he wouldn’t be.)
This set of norms is self-correcting: if someone does something you don’t like, but there’s not a social norm against it, then your next step should be to create a social norm against it. If you can convince 51% of the community that it’s wrong, then the community can unite against it and you can punish it next time. If you can’t convince 51% of the community that it’s wrong, then you should try harder, not play vigilante and try to enforce your unpopular rules yourself.
If you absolutely can’t tolerate something, but you also can’t manage to convince your community that it’s wrong and should be punished, you should work on finding methods that isolate you from the problem, including building a better community somewhere else. I think some of this collapses into a kind of Archipelago solution. Whatever the global norms may be, there ought to be communities catering to people who want more restrictions than normal, and other communities catering to people who want fewer. These communities should have really explicit rules, so that everybody knows what they’re getting into. People should be free to self-select into and out of those communities, and those self-selections should be honored. Safe spaces, 4chan, and this blog are three very different kinds of intentional communities with unusual but locally-well-defined free speech norms, they’re all good for the people who use them, and as long as they keep to themselves I don’t think outsiders have any right to criticize their existence.
IV.
I don’t know if this position is coherent. My guess is there’s a lot of places it doesn’t match my intuition, and a lot of other places where it’s so fuzzy it could justify or condemn anything at all.
But I think trying to hammer out something like this is important. Free speech norms aren’t just about free speech. They quickly bleed over into these really fundamental questions, like – what is a culture? What is it we’re trying to do when we get together and have a society? Are we allowed to want different things from a culture? If so, how do we balance everyone’s competing demands? Do we just live in some kind of postmodern globalized atomized culture, or are cultures these things inextricably linked to specific value systems that we’ve got to keep moored to those systems at all costs? How much are we allowed to use shaming to punish people who don’t conform to our culture? How angry are we allowed to be when other people use shaming to punish people we like who don’t conform to theirs?
Trying to get a model of these things that doesn’t immediately contradict itself on everything is potentially a good first step to trying to get a model of these things that’s right and/or liveable.