Free Speech Trumps the President

Bruce Gustafson
5 min read · Jun 4, 2020

Forced speech is just censorship by another name.

[Image courtesy Wired.com]

In the last week of May, 2020, as the US was going up in flames and a pandemic was sweeping the globe, the people at Twitter made a decision. They chose to exercise free speech — and suffer the consequences — because their moral compass could bend no further. Twitter censored the President of the United States.

Depending on your political philosophy, this was either a triumph for truth and decency or proof that the conspiracy was real and that the one true war was calling the faithful to arms. Viewed from across the Atlantic, however, it told an entirely different story.

For some, it emphatically said Europe was right, and that right would soon be winning.

Two foundational concepts anchor the internet free speech debate: the universally applied practice called “content moderation,” and the government policy that website “users” and website “hosts” are independent of each other, so that the behavior of one doesn’t necessarily accrue to the other. In both cases, the principle is that private hosting entities may restrict the speech of users as an exercise of their own right to control what their platforms are used for.

Let the flame wars begin.

No one would dispute, as Tim Wu analogizes, that a newspaper has the right to publish an editorial stating its particular views on an issue separate from the news, or to not publish an editorial on an important topic, as it sees fit. I cannot force the Times to publish my op-ed any more than I can force them not to publish theirs. Free speech is the right to say what you want (and suffer the consequences), but also the right to not say anything. Free speech does not obligate anyone to listen, to read, or even to provide a venue for the speaker. If you have something to say, it’s up to you to find a forum.

Twitter, as a private company, has the right to place limits on what users can post on its service. There is no obligation for them to be fair, neutral, or even rational in what they decide. If you don’t like their rules, go start a twitter of your own. They impose these rules through content moderation — their ability to decide what meets their service rules and what doesn’t. There are no laws or courts involved, only their editorial committee and the good-will of their customers. Fail at moderation and the market moves elsewhere — it’s just capitalism at work. Moderation is easy to understand, and (almost) no one blames anyone but the platform host for perceived slights or abused discretion. To be clear, the internet without moderation is a cesspool of abuse and criminality. History has shown this time and again. We can all agree, though we might not like it, that Twitter can — at its discretion — censor the content that users post on its service.

The various national laws which assign content responsibility between speakers and forum hosts are far less solid, however. While no one believes that hosting someone else’s content absolves the original author of responsibility for the post, most people believe that a host’s ability to censor what’s uploaded — or not to — means the host bears at least some liability for the bad stuff that goes online. No one holds a landlord liable for graffiti a vandal sprays on the side of a building, yet the ability of a website to moderate content brings with it at least a little responsibility for what users say. The question is: how much?

The United States operates under the default rule that users bear responsibility for what they post, while hosts are responsible for taking down content they know is illegal. It’s trickier than that, and these situations are rarely as cut and dried as they appear in theory, but from a distance that’s the rule. In the EU the rule is similar, with considerable debate over just how hard hosts should be obligated to search out and classify content that is either illegal or “harmful.” EU law does not protect free speech to the degree the US does, however, and in fact makes some speech illegal that would be protected in the US.

This is a problem.

Twitter (and most platforms for user speech) has a reach that transcends geopolitical boundaries. When the EU orders Twitter to police harmful content (as defined by the EU), it is using a private company as a tool of state censorship. This is, of course, perfectly legal in the EU (though deciding just what is and is not illegal speech feels like a judicial function, which perhaps the EU does not really mean to subcontract to a US firm). That same behavior would violate US law, which places far greater prohibitions on government censorship. What would happen if Twitter were, at the same time, obligated to remove AND obligated to retain the same content by two different national legal systems? This lack of harmony between diverging US and EU standards is a serious impediment. Consider, for instance, a Trump tweet that clearly violates EU hate speech laws. Because they do.

The EU is delighted to see that Twitter censored Trump of its own volition. The thinking is that through the platform’s independent action it has established a content standard in keeping with the aspirations of EU regulation: a single, more restrictive standard for free speech. I believe they are wrong and that their celebration is premature.

First, Twitter came to this place of its own volition, not by force of law.

Second, the market remains open to the emergence of a brother-in-law-of-twitter: a service that welcomes Trump’s brand of eye-poking.

Third, what Twitter exercised was its right to speak by not hosting on its service words which it finds offensive. Twitter added its own, unique, and personal editorial voice, thus moving the discourse forward. It did not simply leave a blank where a tweet might have been (though they have that right as well).

The threat of assigning liability for those hosting offensive speech, unless they host offensive speech of the government’s choosing, is a lesson in circular logic. The desire to build a globally-agreed-upon content pre-filter is fantasy, and flawed from inception. If websites are made completely responsible for user content, their lawyers will close the gates. We will devolve from platforms for the many, to many platforms, each for just one. From dialogue and sharing, to voices shouting into the void.

I’m not sure that’s an outcome we should welcome.


Bruce Gustafson

Over-educated tech veteran. Policy wonk. CEO @ DevelopersAlliance.org — advocating for the software developer community in DC and Brussels.