Social media without content moderation would ruin us, and the Supreme Court should know it

As I write this, I feel a bit like a broken record, reporting again how the United States Supreme Court could, with a ruling today (Monday or soon after), decide the fate of social media as we know it. Justices could choose to leave content moderation decisions in the hands of these companies or designate them all as conduits of free and unfettered speech.

The two cases, one from Florida and one from Texas, both revolve around claims that Facebook, X (formerly Twitter), and other social media platforms are unfairly banning content, primarily from right-wing users and sources. The claims mostly arise from a flashpoint in US history: the January 6th riots and storming of the US Capitol. At the time, Facebook, Twitter, YouTube, and other social media companies deplatformed former President Donald Trump out of fear that further posts from him would ignite more violence. In the same era, the companies also sought to weed out what they saw as dangerous misinformation regarding the COVID-19 pandemic and vaccinations. If the states prevail, the rulings would radically alter social media companies' ability to moderate and ban content.

It's one of the biggest and most fraught US Supreme Court cases of the year, but perhaps the question for the justices boils down to what these platforms really are: publishers or utilities. Except I don't think it's that simple.

Is it possible to be both a utility and a content curator? Probably not, but even a utility must ensure that the service it delivers and the contents of its delivery – electricity, gas, water, communications, or television signals – are of high quality and safe for human consumption.

Social media platforms, thanks to Section 230 of the 1996 Communications Decency Act, have enjoyed the protections and privileges of utilities without suffering the liabilities borne by content curators and publishers like, say, The New York Times.

Moderation in a changing world

Naturally, those rules were crafted long before the age of social media and decades before we understood how the consumption of social media affects our surprisingly malleable human minds. Sure, we had AOL and bulletin boards in the 1980s and 1990s, but nothing truly prepared us for Twitter (now X), Facebook, LinkedIn, TikTok, Instagram, YouTube, and other platforms that consume our screens and huge portions of our days (have you checked your Screen Time lately?).

Young minds are especially vulnerable to algorithmically massaged data fed to them like highly targeted firehoses of content and ideas. Social media has the power to shape our perception of the world or, at the very least, reinforce it. In a way, social media has worked in opposition to publishing and media, which once sought to illuminate and rationalize the world through the discovery and sharing of facts.

When politicians, companies, global powers, and others realized that they could not just participate in social media but help shape the messages delivered on it, they took advantage of the platforms' agnostic stance. Twitter and YouTube didn't care what you posted. Content moderation rules were basic and obvious back then, mostly dealing with adult content, as if that would be our biggest problem. No one foresaw the future, and certainly no one foresaw January 6th.

This won't end well

If the Supreme Court rules in favor of Texas and Florida, social media will change, but not in a good way. You can already see what it might be like on X, where free-speech-at-all-costs proponent Elon Musk has almost destroyed the platform by leaving the gates open for, if not outright encouraging, trolling, hate speech, and significant misinformation (quite a bit of it coming directly from Musk himself).

SCOTUS could rob these platforms of their most basic moderation capabilities. It would be like telling the water plant that it can't purify our water without explaining to every customer why it's removing lead and other contaminants. Social media is in a constant battle with content and information polluters. If the platforms can't automate some of the moderation, they will be overrun and soon unusable.

Do I think these platforms should explain their decisions when asked? Yes, but I guarantee that YouTube and others make millions of automated moderation decisions every day. Most of them go unappreciated by us because we can't miss what we don't see, and we should be thankful for most of the things we don't see.

Over the last several years, I've heard from many people on both sides of the political spectrum who believe they've been shadowbanned by social media companies. I take this as a sign that there's some balance. Social media companies are not looking to ban you specifically; they're just blocking objectionable, questionable, or even blatantly incorrect or dangerous content. They get it wrong – a lot – but not as often as they get it right.

Censorship is bad when it happens, but social media without content moderation would be downright dangerous. The US Supreme Court must know this and understand that there is no such thing as perfect content moderation, and that a world without it would probably be perfectly awful.

Lance Ulanoff
Editor at Large

A 38-year industry veteran and award-winning journalist, Lance has covered technology since PCs were the size of suitcases and “on line” meant “waiting.” He’s a former Lifewire Editor-in-Chief, Mashable Editor-in-Chief, and, before that, Editor-in-Chief of PCMag.com and Senior Vice President of Content for Ziff Davis, Inc. He also wrote a popular weekly tech column for Medium called The Upgrade.

Lance Ulanoff makes frequent appearances on national, international, and local news programs including Live with Kelly and Ryan, the Today Show, Good Morning America, CNBC, CNN, and the BBC.