What Congress intended with Section 230 has nothing to do with what the First Amendment protects, and courts should stop implying otherwise.
In August, the U.S. Court of Appeals for the Third Circuit decided a significant Section 230 case, Anderson v. TikTok, ruling that Section 230 does not shield platforms from harm caused by their own algorithmic recommendation of third-party content. The court acknowledged (see footnote 13) that this creates a major circuit split, and it may prompt the Supreme Court to reconsider the issue (hopefully resolving it this time rather than punting it, as it did last year in Gonzalez v. Google).
Although I agree with the outcome, I strongly disagree with the court’s reasoning, which views Section 230 through the lens of the First Amendment and imposes a fair-play requirement on the protections platforms can claim under both Section 230 and the First Amendment. Anderson reflects a concerning trend among courts, including the Supreme Court, to conflate Section 230 and the First Amendment, an error that will only continue to distort Congress’s intent.
Anderson, like many Section 230 cases, involves a tragic story about minors. TikTok allegedly promoted to 10-year-old Nylah Anderson a “Blackout Challenge” video encouraging viewers to engage in acts of self-asphyxiation; she participated in the challenge and died by accidentally hanging herself. Anderson’s family sued TikTok, but the district court dismissed the lawsuit based on Section 230, reasoning (in a preview of precisely the issue that would confront the Supreme Court in Gonzalez) that algorithms were not “content” and that any attempt to hold a platform accountable for its recommendations was actually an attempt to hold it accountable for the third-party content itself.
The circuit court reversed, relying almost entirely on the Supreme Court’s recent opinion in Moody v. NetChoice, in which social media platforms challenged state laws that limited their ability to remove and otherwise moderate user content. The Third Circuit described Moody as holding that “a platform’s algorithm that reflects ‘editorial judgments’ about ‘compiling the third-party speech it wants in the way it wants’ is the platform’s own ‘expressive product’ and is therefore protected by the First Amendment.” “Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms,” the Third Circuit concluded, “it follows that doing so amounts to first-party speech under § 230, too.” TikTok has since petitioned the Third Circuit to rehear the case en banc.
I am sympathetic to the argument that Section 230 should not shield platforms for recommending, rather than just hosting, harmful third-party content. But the Third Circuit’s reasoning is badly flawed. Putting aside the question of whether Moody actually held anything about the substance of the First Amendment, what the First Amendment protects simply has no bearing on the proper interpretation of Section 230. The former is a matter of constitutional law, while the latter is a question about statutory interpretation—specifically, what did Congress think it was doing when, in 1996, it enacted Section 230 as part of the broader Communications Decency Act?
Because the Third Circuit’s analysis is so conclusory, it’s hard to know exactly why it believes the First Amendment question is relevant to interpreting Section 230. But its citation to a recent opinion by Justice Clarence Thomas is suggestive. In Doe v. Snap, the Supreme Court denied certiorari in a case in which the U.S. Court of Appeals for the Fifth Circuit granted Section 230 immunity against a claim that Snapchat should be held liable for designing its platform in a way that allegedly led to the sexual abuse and drug-overdose death of a minor. Thomas dissented, arguing that the platforms’ positions on the First Amendment and Section 230 were at odds: “In the platforms’ world, they are fully responsible for their websites when it results in constitutional protections, but the moment that responsibility could lead to liability, they can disclaim any obligations and enjoy greater protections from suit than nearly any other industry.” Or, as New York Times columnist David French observed in a piece lauding the Anderson opinion, “with legal rights come legal responsibilities.”
The problem with this anti-hypocrisy argument is that the law does not prohibit hypocrisy. Section 230 cases are (or ought to be) about determining Congress’s intent, not about reconciling the immunity that Congress granted in 1996 with the limits the First Amendment places today on the government’s ability to regulate platforms. It is perfectly plausible that Congress intended to provide platforms with statutory immunity that exceeds what a strict First Amendment analysis would prescribe. Indeed, such a choice wouldn’t even necessarily be hypocritical; it could just be a policy choice that strongly favored giving platforms control over online speech. Of course, that’s not to say that this was in fact the policy choice that Congress made in enacting Section 230, or that the current broad interpretation of Section 230, favored by lower courts, is correct. But the reason has nothing to do with the First Amendment. (In this regard, Judge Paul Matey’s separate opinion is better than the majority opinion, as it focuses on the legislative history of Section 230 and avoids tying the rejection of Section 230 immunity to the First Amendment.)
In an ironic twist, decisions like Anderson (and Justice Thomas’s reasoning in his Snap dissent) make the same mistake as do advocates of an overly broad interpretation of Section 230. In Zeran v. AOL, the influential case that interpreted Section 230 shortly after its enactment and established the precedent for subsequent opinions, the U.S. Court of Appeals for the Fourth Circuit broadly interpreted Section 230—overbroadly, as I have argued—because of the “threat that tort-based lawsuits pose to freedom of speech in the new and burgeoning Internet medium.” In other words, Zeran interpreted Section 230 as trying to extend one of the First Amendment’s goals—free expression—through a statutory immunity for platforms, despite clear evidence that Congress had no such overall intention.
By tying Section 230 to the First Amendment, the Anderson decision commits the same conceptual error as Zeran, but in the opposite direction. It acts as if Section 230 is meant to strictly reflect a different aspect of the First Amendment: demarcating what should count as the platforms’ speech in First Amendment cases—that is, when a platform resists the government’s attempt to limit its own speech.
In both cases, core questions regarding Section 230 are understood by reference to First Amendment concepts, even though there’s no indication that this was Congress’s intent. (Indeed, the confusion goes even deeper, with the Supreme Court sometimes also dragging Section 230 into debates about the First Amendment, as if a statute could provide useful gloss on a constitutional provision.)
Section 230 is a complex, confusingly worded statute with enormous policy stakes. That makes the search for its meaning—that is, an accurate description of Congress’s intent—difficult enough. Bringing the First Amendment into the mix simply muddles the picture, and courts should stop doing so.
– Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, a senior editor at Lawfare, and a term member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney’s Office for the District of Maryland. Published courtesy of Lawfare.