COUNTERPOINT: Update Section 230 for the age of algorithmic distribution
Published in Op Eds
It’s the 30th anniversary of Section 230 of the Communications Decency Act, the law that shields communications platforms from legal responsibility for what their users post. That means this law is older than all of Gen Z (not to mention Gen Alpha), yet it still dictates how content is distributed despite the breakthrough technological developments since its passage.
The internet as we know it today is drastically different from the dial-up AOL chatrooms and hobbyist-run message forums of 30 years ago.
We live inside industrial-scale recommendation engines that decide what billions of people see, in what order and how often. The law still treats platforms like passive hosts of user speech, even though the modern product is active distribution: ranking, boosting, autoplaying and nudging users into the next piece of content with machine learning systems optimized for engagement, because engagement sells more ads.
The gap between what platforms do and what the law assumes they are is the real Section 230 problem. This isn’t about free speech anymore, but about power and incentives.
However well-intentioned or even helpful Section 230 has been for the last few decades, the law now allows platforms to escape the consequences that their evolution, design and, frankly, business model have created for users across the globe.
Section 230’s framework made sense in an era when the main choice a service made was whether to leave something up or take it down. Platform distribution has since shifted from “here’s a page of posts” to “here’s an infinite feed the machine assembled for you to keep you scrolling.”
That shift matters because distribution is where the harm and the leverage live. The defining feature of today’s internet is that users don’t “browse” the way they did even five years ago. They are served. The feed is the product. The algorithm is the editor.
When YouTube says its personalized recommendation system accounts for more than 70 percent of users’ time spent watching videos, it shows the system has become the platform’s core distribution channel. When Meta explains how it ranks Feed content using machine learning, that’s a set of engineered choices about what gets surfaced, what gets buried, and what gets repeated until it sticks. TikTok’s help documentation describes recommendations as a personalized system driven by user interactions. None of this was even imaginable back when Section 230 was written.
So, here’s my take on how you maintain free speech while holding companies accountable: keep Section 230 for passive hosting, then stop handing blanket immunity to algorithmic amplification at scale. Hosting is letting someone speak. Amplification is deciding who hears it, how often, and in what emotional sequence. Currently, the law treats those as the same thing. They aren’t.
Section 230 helped build the internet. But it was built for a version of the internet that no longer exists. My generation didn’t grow up with MySpace. We grew up inside algorithmic feeds engineered to maximize attention, emotion and time spent.
When distribution is automated, optimized and monetized at a global scale, immunity cannot remain automatic. Updating Section 230 is not about punishing speech or dismantling the open web. It’s about aligning responsibility with modern power. Laws written for (essentially) bulletin boards should not serve as a liability shield for systems that shape public discourse in real time.
If we’re serious about innovation, competition and free expression, then the legal framework governing the digital world should reflect the digital world we actually live in — not the one that existed before many of us were born.
_____
ABOUT THE WRITER
Annie Moore is a founder and managing partner of Imperio Chaos, a digital-first public affairs and global strategic advisory firm. She wrote this for InsideSources.com.
_____
©2026 Tribune Content Agency, LLC




















































