We Need Laws Restricting Online Deplatforming

Shiva Bhaskar
12 min read · Apr 2, 2019

Background

In the summer of 2016, Milo Yiannopoulos (commonly known as Milo), a then-Breitbart writer and right-leaning provocateur, was permanently banned from Twitter. At that time, Yiannopoulos was perhaps the most prominent political voice to be banned from a major Web platform.

Twitter’s action against Milo is an example of “deplatforming.” In the online context, deplatforming means barring a user (or group of users) from an online service, such as a social network, payment/funding hub, content distribution platform, web host, or online security provider, due to the user’s views or conduct. Here, we focus on deplatforming by Facebook, Twitter and YouTube in the United States.

In 2018, Facebook removed several pages controlled by Richard Spencer, an alt-right leader, as well as accounts and groups affiliated with the Proud Boys, a self-described male-only group of “Western chauvinists.” Facebook’s policies came under further scrutiny after the March 2019 massacre of Muslim worshipers at mosques in New Zealand, which was streamed live on Facebook before being taken down. The mosque shootings seemingly prompted Facebook to ban white nationalist and separatist content, which until then had been at least partially permitted.

Twitter continues to remove accounts of controversial figures. These range from writer and conspiracy theorist Paul Joseph Watson, to white nationalists like Paul Nehlen and Jared Taylor, to political activist and online personality Laura Loomer.

YouTube is no exception to the speech wars. Prager University, a conservative nonprofit, unsuccessfully sued Google (YouTube’s parent company), claiming that some of its videos were unfairly demonetized (stripped of ads). In 2018, Alex Jones, a conspiracy theorist who speculated that the Sandy Hook school shootings were a hoax, was booted off YouTube (as well as Twitter, Facebook, Apple and Spotify).

Where The Law Stands On Freedom Of Expression

The First Amendment provides that “Congress shall make no law…abridging the freedom of speech, or of the press…” There’s one key word we must focus on: Congress. This provision has traditionally been applied to government actions which restrict freedom of speech. When a governmental body bars the private possession of obscene material, prohibits the desecration of the American flag in public, or restricts independent political spending, the Supreme Court has held that it violates the First Amendment.

None of these cases, however, involved the actions of a private actor, such as an employer, retail store, or a website. In fact, courts offer much greater leeway to private employers, as well as property owners, in restricting such speech. Individual states, however, can pass laws requiring private property owners (like shopping malls) to allow certain types of speech on premises.

A Crucial Supreme Court Case

This leads to an important question — would it be permissible for federal or state governments to pass laws which restrict deplatforming of individuals or groups by Facebook, Twitter or YouTube? The 1980 Supreme Court case of Pruneyard Shopping Center v. Robins offers a starting point for answering this question.

In that case, the Supreme Court held that a group of political activists soliciting signatures for a political cause in the common areas of a California shopping mall had a right to do so, despite a mall policy forbidding “publicly expressive activity” outside of commercial purposes. The Court looked to a provision of the California Constitution which protects free speech rights on private property, as long as such speech is “reasonably exercised.”

The shopping mall owners argued that being required to allow this sort of speech amounted to a “taking” of their property, which is forbidden by the Fifth Amendment and Fourteenth Amendment to the Constitution. The owners also argued that they had a First Amendment right not to be forced to use the property as a forum for others (i.e. a “right to exclude”), and that they were being forced to communicate a particular message.

The Court upheld the right of the activists to solicit signatures at the mall. It noted that a state may create laws which are broader, in terms of protecting speech or other rights, than the US Constitution.

The Court further noted that states are allowed to impose reasonable restrictions on the use of a property, as long as such restrictions “do not amount to a taking without just compensation…” In response to the mall’s argument that being forced to allow activists to collect signatures amounted to a taking of its property, the Court found such activities were not so intrusive as to “unreasonably impair the value or use” of the property. In reaching this decision, the Court looked at the size of the mall, the fact that it was open to the public, and the activity being conducted (quiet solicitation of signatures for a political issue, with minimal interference to shoppers and businesses).

Lastly, the Court rejected the mall’s argument that, by allowing activists to gather signatures and approach patrons, it was being forced to promote a particular message. The Court observed that the shopping mall was open for the public to visit, engage in commerce, and (within reason) express themselves.

The views of members of the public should thus not be conflated with those of the owner. The state was not ordering any particular message to be displayed at the mall — rather, activists simply conveyed a message of their choosing. The mall’s owners could easily disavow any endorsement of what was said.

Applying Pruneyard Shopping Center To Deplatforming

At first glance, it might seem unclear how soliciting signatures at a shopping mall relates to barring users from an online platform. A closer look, however, reveals considerable similarities.

Each major Web platform exists for commercial purposes, and gains economic value from user and customer engagement. If users stopped liking, commenting on and sharing content on Facebook, engaging with and uploading videos to YouTube, or tweeting out their thoughts on Twitter, each platform would be commercially worthless.

For these reasons, each platform spends considerable effort and money on recruiting and engaging users. In that sense, each acts like Pruneyard Shopping Center, holding itself open to the public for commercial purposes.

Web platforms might counter that unlike a shopping mall, they have a registration process — entering one’s name, email, location and other data. However, this is (deliberately) rather simple, and limited in scope.

Also, registration serves largely commercial purposes, as when Facebook sends email reminders of friends’ birthdays, or Twitter emails you about recent tweets — efforts to push engagement with the platforms. Furthermore, on YouTube, Twitter, and even Facebook, it is possible to access some or all content without being a member — much like entering a shopping mall.

Platforms have a much stronger claim than the shopping center in Pruneyard that being forced to accept certain types of speech amounts to a taking which would “unreasonably impair the value or use” of their property. If a Twitter user with a large following tweets insults about and directly at another user (inspiring his followers to behave in a similar manner), or solicits funds to “take out” a political adversary, it can have a chilling effect on other people’s willingness to engage on Twitter. When trolls take over the comment sections of certain YouTube videos, those who wish to engage thoughtfully might be deterred. The same is true of trolls who invade a private Facebook group for survivors of sexual assault and threaten to expose confidential information.

Eventually, fewer people will engage with platforms like YouTube, Facebook and Twitter, and the commercial value of these platforms (in terms of value to advertisers) will be reduced. In this sense, a blanket law forbidding all deplatforming of users could very well “unreasonably impair the value or use” of their property, as articulated in Pruneyard.

The Supreme Court in Pruneyard also noted that the California Supreme Court (which heard the case earlier, since it involved state law) held that the shopping center could implement “time, place, and manner regulations that minimize any interference with its commercial functions.” Some restrictions on activities which disrupt the enjoyment of others on a platform clearly fall into an analogous category.

However, just as in Pruneyard, it would be difficult for a platform to credibly argue that it is being forced to promote a particular message. It is understood that each platform serves users with a wide range of opinions. Platforms can explicitly state that they don’t endorse any particular viewpoint — many have already done so in their Terms of Service.

Where does this leave us? A state (let’s say Texas) could pass a law forbidding platforms operating in the state from banning or refusing service to users on the basis of content or ideology.

This law would carve out exceptions (that is, permit bans) for users who share content which violates copyright or trademark law. It would allow bans against those who harass or call for violence against others, share depictions of violence on the platform, or engage in conduct which they should reasonably anticipate will encourage harassment by other users.

It would also allow platforms to take action against deliberate efforts to hijack comment threads, or otherwise directly disrupt other people’s enjoyment of a platform. Sharing unpopular or hateful views would not count as disruptive of another user’s enjoyment, as long as it is not directly targeted towards that user.

Platforms would be required to craft clear policies and procedures for deciding whether a user or group of users falls within these exceptions. Ideally, users would be offered at least one warning, along with guidance on permissible and forbidden conduct, before being banned. Disagreements over an individual or group being banned could be resolved through an internal appeals process, articulated by each Web platform.

If that fails to bring about a resolution, the judicial system is another avenue — such a law would create a right to sue. This approach requires a great degree of human judgment, and some amount of experimentation, to balance the rights of various parties.

Under such a framework, Jared Taylor would still be on Twitter, despite his advocacy of white nationalist ideals. A Facebook group which Richard Spencer created could remain active on the platform, and there’s no reason Prager University should have been demonetized on YouTube. Facebook’s blanket ban on white supremacist and nationalist individuals and content would be impermissible under this law. One’s views would not serve as lawful grounds for exclusion.

On the other hand, those who engage in deliberate harassment of others, or in conduct which might reasonably be anticipated to lead to such harassment, as with Milo and likely Alex Jones, could lawfully be banned. Both men enjoyed large, committed followings on Twitter and elsewhere, followings which aggressively attacked anyone who incurred Milo or Jones’ wrath. Both cultivated this audience, and thus bear some degree of responsibility for its actions.

Facebook users who express support for acts of violence, or who repeatedly share content which depicts or incites violence, could also be banned. Trolls who hijack comment threads, as during a YouTube livestream of a Congressional hearing, and users who personally demean others in comment threads (mere disagreement would not count, but insults based on race or gender would), could certainly be barred — again, after being offered a warning, and with a well-articulated appeals process.

Some free speech advocates might see these restrictions as overly broad, and not offering sufficient protections for controversial speech. We should note that freedom of speech, from a Constitutional perspective, has never been absolute. Time, place and manner restrictions have long been upheld, while direct incitement to violence can be restricted.

Despite their widespread reach and commercial success, platforms are still private businesses, and must be given some leeway in regulating user behavior. It is a balance. Otherwise, they would in effect be asked to privilege the rights of a few users over everyone else, which could be highly damaging economically. We cannot treat Twitter, Facebook, or YouTube the same way as a public park, where nearly all speech might be permitted.

Since Texas is the second-largest state by population, few platforms will willingly cease operations in such a large, critical market. Other states might choose to follow Texas’ lead. Over time, platforms will adapt to these new rules, perhaps nationally.

Online speech will be governed by clearer rules. What is and is not permitted will always require human judgment, and extensive internal appeals and outside litigation are likely before the boundaries settle. It will be a contentious process, but ultimately, deplatforming will be far less arbitrary than it is today.

At the same time, a range of content which most people of conscience might find toxic, including aggressively racist and sexist materials, will be freely available across the Web. Purveyors of such materials will continue to build large followings, and spread discord and disinformation.

This leads us to our final question — is it advisable to pass this sort of law? What are the policy implications?

Restricting Online Deplatforming Is Good Public Policy

There are several powerful arguments in favor of allowing online platforms to ban users at will. First, deplatforming seems to be effective in reducing the online reach of banned individuals and ideas. In the long run, the overall online attention garnered by individuals like Alex Jones and Milo Yiannopoulos is greatly reduced by deplatforming. Reddit’s ban on highly controversial subreddits led to an overall reduction in harassment on the platform.

If one believes that the speech of certain individuals is destructive for society as a whole, then deplatforming is a smart move. In an era where much of the population uses Facebook, YouTube, Twitter and other platforms as a major source of news and information, deplatforming can stem the spread of speech which is factually false, or socially disruptive.

Deplatforming can also help facilitate free speech, by reducing harassment on platforms. In 2014, Fark, a popular link-aggregation website, banned misogynistic conduct in the site’s comment threads, in response to continual harassment of and threats against women. Jezebel made a similar move, in response to the constant posting of photos suggesting sexual assault in its comment threads. Whitney Phillips, an academic who studies online trolling, argues that such moves encourage “more speech, not less,” which helps “give voice to those who otherwise would be shouted down, drowned out, or scared off.”

Phillips makes an important argument. Threats, harassment, and deliberately disruptive behavior can drive users off a platform. No one thinks “I’d like someone to threaten my life, call me foul names and insult my gender, race, and religion. Let me comment on a website where that frequently happens.”

As discussed earlier, any law restricting deplatforming should differentiate between posting content that many find offensive (such as videos or blog posts with misogynist or racist content) and directly attacking another user, whether in the comments section or through coordinated trolling attacks. The former would be protected as free speech, but web platforms would remain free to ban the latter.

This still leaves us with a fundamental question: Is requiring platforms to host deeply offensive but non-harassing content a wise policy move? Yes.

For as long as humans have walked this planet, controversial ideas have found their way to wider audiences. Deplatforming might slow the spread of some ideas online, but it will not extinguish them from this world. Throughout history, ideas ranging from the major religions, great scientific advances and the Enlightenment to communism and religious fundamentalism have spread powerfully for thousands of years, without the Internet and in the face of ferocious opposition.

Quite often, the aura of an idea being forbidden and condemned by society can make it more attractive, allowing it to spread further and with greater intensity. A feeling of solidarity and persecution also takes root among those who hold a verboten viewpoint, strengthening their determination to propagate such beliefs. The minority rule, which Nassim Taleb explains and science supports, suggests that ideas held by a small, highly committed minority can ultimately end up being accepted by society as a whole. In this sense, deplatforming might have the opposite of its intended effect, by making “bad” ideas more popular.

Uncontrolled deplatforming is also detrimental to the idea of a free society, and allows excessive concentration of power. YouTube, Facebook and Twitter are private entities, owned by shareholders, and must enjoy some control over user behavior. Yet, the tremendous reach these platforms have achieved, as a source of news, information and social engagement, compels us to think about these companies somewhat outside of the public-private dichotomy.

Each platform has become powerful enough to determine which ideas are heard, and which are (at least temporarily) silenced. We should assiduously avoid allowing any entity in our society, public or private, to exercise such powerful influence.

Rather than suppressing disfavored ideas, we should allow them to be openly advocated for, and assessed on their merits (or lack thereof). Those who oppose these ideas must push back, and advance cogent arguments for why certain beliefs are wrong. They should use every avenue available, to illuminate the wrongness of what they oppose.

This can be scary. After all, both the best and worst moments in human history flow from ideas, with bad ideas carrying deadly consequences, sometimes for generations on end. This is not some theoretical exercise, conducted on a campus in Cambridge or Palo Alto. Rather, it is a battle, in real time, with the fate of American society at stake. However, this is the only way.

Almost three years after Milo was banned from Twitter, the online speech wars remain as heated as ever. Ultimately, the ideal of a completely free, open, “anything goes” platform isn’t likely to work. Perhaps it was always a pipe dream.

However, arbitrary deplatforming, without well-defined criteria, an internal appeals process, and legal recourse, is also unacceptable. We have to protect the rights of people to say unpopular things, while ensuring that such rights don’t silence others. The time for deplatforming laws is now.

Shiva Bhaskar

Enjoy reading and writing about technology, law, business, politics and more. An attorney by training, I’m a native of Los Angeles, and a former New Yorker.