The justice claims the Digital Services Act hasn’t caused disaster, but public policy requires more subtle assessments of impact.
“The sky has not fallen,” wrote Justice Samuel Alito in his concurring opinion in July’s Moody v. NetChoice decision, describing the impact of the European Union’s Digital Services Act (DSA). But is he right?
Policy debates often focus on predictions of impact: If a proposed law is enacted, what will the effect be? As Justice Alito’s comment demonstrates, legal actors frequently frame these predictions in absolutes: The sky will, or will not, fall. Short of disaster, he suggests, a law is perfectly adequate. But judges are not the only ones who make claims in these terms. Companies often argue the opposite absolute: that if such a law is enacted, catastrophe is inevitable and innovation will slow to a trickle.
Catastrophizing rhetoric may be appealing in judicial opinions and policy debates, but it makes it harder to enact evidence-based tech policy. Instead, policymakers need detailed evaluations of more subtle metrics to understand the trade-offs inherent in any reform.
How to Make Sense of “the Sky Is Falling”
The Moody decision centered on states’ ability to enact laws that restrict the content moderation decisions of tech platforms. In his concurrence, Justice Alito took issue with NetChoice’s argument that it could not provide details on its members’ content moderation practices without serious risk: Disclosure, NetChoice claimed, would “embolden ‘malicious actors’” and divulge “proprietary and closely held” information. Platforms already make such disclosures in Europe due to the DSA’s requirements, Alito noted, but “the sky has not fallen.”
Companies also contribute to the “sky is falling” rhetoric by catastrophizing in their own communications. When companies and trade associations advocate for or against new laws, they frequently frame the impacts in hyperbolic terms: A new bill will “break” beloved tech products; new regulation will hobble American companies’ ability to compete with their Chinese rivals. When parties on one side of an issue describe potential costs in such extreme terms, they invite equally extreme rhetoric from the other side.
To try to make sense of Justice Alito’s statement, a starting point might be to identify what the sky means in the context of the DSA. There are any number of potential ramifications from the disclosures the DSA requires, starting with the “malicious” activity NetChoice mentions. That activity might include cybersecurity incidents, spam and fraud, grooming of minors, and an increase in disinformation and other harmful content. Additionally, if the DSA’s disclosure obligations are more burdensome for smaller companies than for larger, better-resourced ones, the law could increase tech sector concentration. And it might reduce innovation in disclosure regimes, since companies would be incentivized to focus simply on complying with the terms of the DSA as they stand today.
Once the sky comes into focus, the next question is what it means to fall. One argument might be that small percentage changes in the prevalence of harm do not qualify, since in relative terms the change is minor.
But with tech products, the sky might fall even with small relative changes, since many tech platforms operate at an immense scale. One estimate suggests that Facebook users share more than 4.5 billion items each day, including 350 million photos and 1 billion stories. At that scale, even a 5 percent increase in cybersecurity incidents equates to thousands of additional incidents each day, a meaningful impact on online safety.
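To make the scale point concrete, consider a rough back-of-the-envelope calculation. The baseline incident rate below is an illustrative assumption, not a platform-reported figure; only the daily sharing volume comes from the estimate above.

```python
# Back-of-the-envelope illustration of how scale turns small relative
# changes into large absolute ones. The incident rate is an assumption
# chosen for illustration, not platform-reported data.

ITEMS_SHARED_PER_DAY = 4_500_000_000  # estimated daily shared items (per the estimate above)
BASELINE_INCIDENT_RATE = 0.00001      # assumed: 1 incident per 100,000 items shared

baseline_incidents = ITEMS_SHARED_PER_DAY * BASELINE_INCIDENT_RATE
additional_incidents = baseline_incidents * 0.05  # a "small" 5 percent relative increase

print(f"Baseline incidents per day: {baseline_incidents:,.0f}")            # 45,000
print(f"Added incidents from a 5% increase: {additional_incidents:,.0f}")  # 2,250
```

Even under these conservative assumptions, a 5 percent relative change produces more than 2,000 additional incidents every day, or roughly 800,000 per year.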
In fact, lawmakers and company representatives often conduct debates in precisely these terms. Companies argue that they have policies designed to provide safe experiences for most users in most cases, and thus that the number of pieces of harmful content and problematic experiences is small as a percentage of the overall online experience. In contrast, lawmakers point to individual problematic incidents as evidence of systemic corporate failures, no matter how small a company’s error rate might be.
One recent point of comparison is the decline in users and advertisers on X (formerly Twitter). In the wake of Elon Musk’s acquisition of the company, the number of monthly users declined by 15 percent, and advertising revenue dropped 54 percent. By most accounts, the impact on the company has been devastating, and Musk filed lawsuits to try to claw back some of the losses. But the company remains operational, with millions of users and billions of dollars in annual revenue. In Justice Alito’s view, presumably, the sky has not fallen on X.
His comment in Moody suggests that as long as tech products remain usable and NetChoice’s members remain operational, the impact of a law should not warrant concern. But this threshold means that laws could affect safety, product functionality, and well-being for millions of users without ever being considered catastrophic enough to move the judicial needle.
Alternative Impact Assessments and the Challenges of Measuring Them
Catastrophizing distracts from a more nuanced assessment of impact. In the tech sector, success and failure occur at the margins; slight changes in a product can have small but meaningful effects on innovation, product quality, and user satisfaction. Take short-form video: Meta, Snap, and YouTube were slow to develop compelling products, were outperformed by TikTok, and are still trying to recover the lost ground.
The impact of new tech public policy should be evaluated similarly. A new law might be measured by its effect on competition, including raising or lowering barriers to entry or leading to a decrease in venture capital funding for startups. It might change the volume of problematic content, measured by changes in the number of user reports or the number of government requests for user data or content removal. It might alter product quality, evaluated by user sentiment about product safety, privacy features, and ease of use. Subtle impacts matter, even when the sky doesn’t fall.
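To sketch what such an evaluation might look like in practice, here is a minimal before-and-after comparison of a single hypothetical metric. The metric and the figures are invented for illustration; a credible study would also need to control for trends unrelated to the law.

```python
# Hypothetical before/after comparison for one evaluation metric.
# All figures are invented for illustration; a real analysis would need
# platform-supplied data and controls for confounding trends.

reports_before = 120_000  # assumed monthly user reports before the law took effect
reports_after = 103_000   # assumed monthly user reports after

pct_change = (reports_after - reports_before) / reports_before * 100
print(f"Change in monthly user reports: {pct_change:+.1f}%")  # -14.2%
```

A single number like this cannot establish causation, but tracked consistently across metrics and over time, such comparisons are the raw material of evidence-based evaluation.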
Evaluation should align with the goals policymakers set for the laws they pass. For instance, the European Commission references the role the DSA can play in “setting clear and proportionate rules” that “plac[e] citizens at the center.” Accordingly, one metric for evaluating the DSA might be how it affects user understanding of platform policies and enforcement.
But companies can make it harder to develop these more nuanced assessments of impact, since they often do not publicize the type of granular data that could shed light on the effect of a new law. They may have valid rationales for limiting who can access their data; companies are not shielded from privacy and consumer protection law when they provide information to third parties, such as researchers. Still, a more refined discussion of impact cannot occur without more detailed company data.
If a government wants to assess a law’s effect on company security, for instance, it will struggle in the absence of company reporting on cybersecurity incidents. Similarly, academics seeking to study whether a new law led to an increase or decrease in online harassment or misinformation face roadblocks in gathering data at scale, both from public sources and from the companies themselves.
Governments also miss opportunities to track the impact of new laws. Even if there were clear agreement on what constitutes the sky (the data points to measure) and whether it has fallen (the level of impact that qualifies as a material change), no formal mechanism exists to track the effect of tech reforms after they are passed, leaving lawmakers and the public with little sense of what their costs and benefits are. Tellingly, in his concurrence, Alito offers no citation for his conclusion.
Despite the DSA’s emphasis on transparency, the European Commission has not yet provided meaningful detail on its impact. Documentation published by the commission focuses primarily on the steps the platforms have taken to comply with the law’s terms, rather than the effect of those compliance measures on users and tech markets. The commission’s assessment mentions that YouTube and TikTok eliminated the ability to target advertising to minors, for instance, but does not provide data on how those limits affect child safety, advertising value or costs, consumer sentiment, or competitiveness. Reaching conclusions on any of those topics would be premature at this early stage of implementation, and the commission notes that its “work as enforcer has just begun.” Even so, to enable any outside observer to assess the impact of the DSA on users, products, and the market, the commission must provide more detail on other meaningful metrics.
Uncertainty about the effect of new tech law extends far beyond the DSA. The European Commission is often lauded as a leader in privacy regulation for its implementation of the General Data Protection Regulation (GDPR), but some initial studies concluded that the law had a negative effect on venture capital investment. Several years after the GDPR came into effect, it is unclear whether the law has generated enough beneficial impacts on privacy to outweigh its costs for startup activity. And the impacts of the Digital Markets Act, the DSA’s companion law that recently came into force, remain uncertain as well, with proponents arguing that it will promote fair competition and detractors claiming it will hinder innovation and compromise users’ security.
Congress has been no better at tracking the impact of the limited number of tech reforms it passes. For example, Section 230, the law that provides platforms immunity for third-party content they host, was amended in 2018 so that platforms cannot use it as a defense in sex trafficking cases. Since then, the debate about the efficacy of this reform has focused primarily on anecdotes about its effect on sex workers and the number of times it has served as the basis of a prosecution. A bill to study the law’s impact on sex workers in more detail never passed. If policymakers want to understand the trade-offs of further Section 230 reform through empirical assessments of the law’s effect on censorship, safety, and competition, they will not have much to draw from.
Of course, Alito’s role is distinct from that of a policymaker or an academic. Resolving a case may not require significant detail about a law’s impact. In Moody, Alito emphasizes that NetChoice, not the state government defendants or the justices, bears the burden of justifying why its members cannot disclose certain information about their content moderation practices when the case is heard on remand. If the sky is falling, NetChoice must prove it.
But even if it’s not necessary in all court proceedings, more detail on policy impact will be important in deepening our understanding of the trade-offs associated with various policy proposals. If lawmakers in another country are considering enacting DSA-style content regulation, they will need more information about how the DSA affected users and platforms in Europe in order to assess how a similar law might affect their own jurisdiction.
A Policy Toolkit for Measuring Impact
To assess the impact of new laws and regulations after they are implemented, lawmakers should build evaluation mechanisms into the laws and policy processes themselves. They might require, for example, a cost-benefit analysis of a new law before it comes up for a vote in a legislature, so that lawmakers can use that analysis to evaluate the proposal. Similar analyses could be mandated after implementation to enable understanding of a law’s effects. These evaluations could be performed by a government office, just as the Office of Information and Regulatory Affairs conducts cost-benefit analysis of government rules. Alternatively, legislation might establish external audit bodies to review impact, composed of academic, civil society, and industry representatives.
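As a rough illustration of the mechanics of such an analysis, the sketch below tabulates hypothetical annual costs and benefits of a new law and discounts them to a net present value. Every category and figure is invented; the point is only to show the kind of output an evaluation office might produce.

```python
# Toy post-implementation cost-benefit tabulation for a hypothetical law.
# All figures are invented for illustration.

DISCOUNT_RATE = 0.03  # assumed annual discount rate

# (year, estimated benefits, estimated costs), in millions of dollars
estimates = [
    (1, 40.0, 90.0),  # year 1: heavy compliance costs, modest benefits
    (2, 75.0, 50.0),
    (3, 95.0, 45.0),
]

npv = sum(
    (benefits - costs) / (1 + DISCOUNT_RATE) ** year
    for year, benefits, costs in estimates
)
print(f"Net present value: {npv:+.1f}M")  # positive means benefits outweigh costs
```

A real analysis would require defensible estimates for each category and sensitivity testing of the discount rate and other assumptions.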
Companies also have a role to play—sharing more information on how they are affected by new laws will allow for more robust impact assessments, provided they can offer this transparency without running afoul of privacy, consumer protection, and antitrust law. They might do this by publishing policy impact reports or by sharing data with researchers who can analyze the data and publish the results themselves. They might also include more specific data points in litigation, testimony, and regulatory submissions. If companies are concerned that publishing reports about the negative effects of new laws will provoke the ire of lawmakers, they could instead provide impact data to a separate audit body that could compile data from a diverse set of companies and publish a report with aggregated information.
A fitting conclusion to an assessment of Alito’s comment in NetChoice is to look more closely at the story his phrase references: “Chicken Little.” In the fable, a chicken believes the sky is falling after an acorn falls on its head. The chicken sets out to tell the king, meeting other animals along the way. But in the end, a fox invites the animals to its lair and eats them. The animals’ misplaced fear blinds them to a more dangerous threat.
Whether or not Justice Alito is correct that the sky did not fall after the DSA was implemented, we should remain vigilant about other potential risks. Too narrow a focus on catastrophe may blind us to the gaps in our knowledge that prevent a more subtle understanding of the costs and benefits of newly enacted laws. Beware the fox.
– Matt Perault is a contributing editor at Lawfare, the director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, and a consultant on technology policy issues. Published courtesy of Lawfare.