Tort Law Should Be the Centerpiece of AI Governance

The key advantage of liability is that it directly addresses the core policy problem of frontier AI.


In a recent Lawfare article, Matthew van der Merwe, Ketan Ramakrishnan, and Markus Anderljung (VRA) argue that tort law has an important but subsidiary role to play in frontier artificial intelligence (AI) governance. They suggest that its ability to govern frontier AI risk is limited and that it should mostly serve as a “gap-filler” until meaningful administrative regulation is implemented. I disagree.

The key advantage of liability is that it directly addresses the core policy problem of frontier AI: Training and deploying frontier AI systems generates risks of harm, including catastrophic risks like an AI system helping terrorists build a bioweapon, that are not borne by the developers. This is what economists call a negative externality, and the standard textbook response is to force the generator of a negative externality to internalize the cost.

In some domains, climate change chief among them, where the harmful output (greenhouse gas emissions) is readily measured and the resulting harms are diffuse, this is best accomplished by taxing the externality. This is because the marginal contribution of a ton of carbon dioxide to climate risk (as measured by something like the social cost of carbon) is relatively transparent, but it’s extremely difficult to trace any particular climate-related harm to a specific carbon dioxide emission event. By contrast, the marginal contributions of specific actions by AI labs to AI risk are far less transparent, which would make it difficult to implement an AI risk tax. But the anticipated harms from AI are likely to be more concentrated and traceable (though not perfectly so) to the conduct of specific AI systems and their human developers and deployers. That suggests liability is a more appropriate mechanism for aligning the incentives of AI developers with the interests of society at large.
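To make the internalization logic concrete, here is a minimal numerical sketch contrasting the two mechanisms. Every figure in it (the emissions level, the per-ton social cost, the probability and size of an AI-related harm) is a hypothetical assumption chosen purely for illustration, not an estimate drawn from this article or the underlying literature.

```python
# Hypothetical sketch of the two cost-internalization mechanisms discussed above.
# All numbers are illustrative assumptions, not estimates from the article.

# Pigouvian tax: suited to harms with a measurable per-unit output but diffuse damages.
emissions_tons = 1_000_000        # assumed annual CO2 emissions of some firm
social_cost_per_ton = 190         # assumed social cost of carbon, in dollars per ton
tax_bill = emissions_tons * social_cost_per_ton  # charged ex ante, per unit emitted

# Liability: suited to harms that are traceable to specific conduct after the fact.
p_traceable_harm = 0.01           # assumed probability a deployment causes a traceable harm
harm_if_realized = 5_000_000_000  # assumed damages if that harm occurs, in dollars
expected_liability = p_traceable_harm * harm_if_realized  # internalized ex post, in expectation

print(f"Tax internalizes the externality ex ante:           ${tax_bill:,.0f}")
print(f"Liability internalizes it ex post (in expectation):  ${expected_liability:,.0f}")
```

Either way, the firm faces the expected social cost of its conduct; the choice between the two turns on whether the harmful output is measurable up front or the harm is traceable after the fact.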

VRA identify several limitations of tort law. Perhaps the most important of these is their claim that tort law cannot adequately address very large harms. VRA generously cite my paper on the role that tort law can play in mitigating catastrophic AI risk, but this may give the reader the impression that I endorse their claim that tort law cannot adequately address the risk of catastrophic AI harms.

While I do agree that existing tort law falls short in terms of deterring very large harms, a core argument of my paper is that tort law doctrine can be tweaked to overcome this problem. The challenge, as I see it, is that insurable risks can be internalized directly via compensatory damages, but uninsurable risks—those that might give rise to catastrophes so large as to be practically noncompensable—cannot. If such a risk is realized, a compensatory damages award would not be enforceable, because either the award would push the liable party into bankruptcy or, worse, the harm would be so disruptive that the legal system would no longer be functional. So, if AI developers and deployers are to be held accountable for these risks, this will have to be accomplished indirectly.

Courts or legislatures can do this by applying punitive damages in cases of practically compensable harms (harms small enough that they can be paid out without pushing the liable party into bankruptcy) that are associated with uninsurable risk. That is, if a system causes noncatastrophic harm in a manner that suggests it easily could have been much worse, resulting in an uninsurable catastrophe, we should hold the humans responsible for generating the uninsurable risk. Precisely because a liability judgment would be unenforceable if an uninsurable catastrophe actually occurred, the only way to compel AI developers to internalize uninsurable risks is to hold them liable for generating those risks in “near-miss” cases, where it looks like the harms could have been much larger.

Imagine an AI system tasked with administering a clinical trial for a risky new drug. It encounters difficulty in recruiting participants honestly. Instead of reporting this difficulty to its human overseers, it resorts to some combination of deception and coercion to get people to participate and conceals this behavior from the humans involved. It even deceives the less capable but more trusted AI system tasked with monitoring it. When the trial participants suffer extremely adverse health effects from the drug, they sue. It seems clear in this case that we have a misaligned system, in the sense that it was not doing what its human deployers wanted. But its misalignment was revealed in a noncatastrophic way. Of course, this result is unfortunate for the participants wrongfully recruited into the trial. But the harms could have been much greater had the system concealed its misalignment until it had the opportunity to inflict more harm in pursuit of goals more ambitious than successfully completing one clinical trial. Perhaps this system had narrow goals, or a short time horizon, or poor situational awareness. But the humans who decided to deploy the system in this manner probably could not have been confident, ex ante, that it would fail in this noncatastrophic fashion. They probably thought it was not so misaligned, or they wouldn’t have deployed it at all.

Imagine 100 study participants each suffered $100,000 of harm, resulting in $10 million in compensatory damages. But let’s say the fact finder also determines that deploying this system created a 0.2 percent chance of causing an uninsurable catastrophe inflicting $1 trillion in harm, on average. The expected value of the harm arising from such a risk is 1/500 of $1 trillion, or $2 billion. Allowing each plaintiff in the clinical trial lawsuit to recover $20 million in punitive damages in addition to their $100,000 in compensatory damages would force the AI developers and deployers to account for the risk that their system will inflict practically noncompensable harm.
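The arithmetic of this hypothetical can be laid out explicitly. The short sketch below simply reproduces the numbers from the example above (100 plaintiffs, $100,000 in harm each, a 0.2 percent chance of a $1 trillion catastrophe); it is illustrative only and not drawn from any real case.

```python
# Worked arithmetic for the clinical trial hypothetical above.
# All figures come from the example in the text, not from any real case.

num_plaintiffs = 100
compensatory_per_plaintiff = 100_000                 # each participant's realized harm, in dollars
total_compensatory = num_plaintiffs * compensatory_per_plaintiff    # $10 million

p_catastrophe = 0.002                                # fact finder's estimate: 0.2 percent chance
catastrophe_harm = 1_000_000_000_000                 # average uninsurable harm, $1 trillion
expected_uninsurable_harm = p_catastrophe * catastrophe_harm        # $2 billion

# Spreading the expected uninsurable harm across the plaintiffs yields the punitive award.
punitive_per_plaintiff = expected_uninsurable_harm / num_plaintiffs  # $20 million each

print(f"Total compensatory damages:     ${total_compensatory:,.0f}")
print(f"Expected uninsurable harm:      ${expected_uninsurable_harm:,.0f}")
print(f"Punitive damages per plaintiff: ${punitive_per_plaintiff:,.0f}")
```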

To be sure, there are formidable barriers to implementing punitive damages in this form. First, long-standing doctrine requires that the defendant act with malice or recklessness to qualify for punitive damages, and that mental state is unlikely to be provable in most cases of AI harm. Courts, concerned with due process, have also been reluctant to uphold punitive damages awards that are a high multiple of the compensatory damages awarded, though some precedents do support this practice. Nonetheless, this approach to punitive damages is entirely consistent with their core normative rationale: that compensatory damages alone would tend to underdeter the underlying tortious activity. Indeed, law and economics scholars have long argued that the malice or recklessness requirement maps poorly onto this rationale and inhibits efficient deterrence. Moreover, reforming punitive damages to account for uninsurable risks, even in cases of ordinary negligence or strict liability, is clearly within the common law powers of state courts. This reform could also be enacted by state or federal legislation.

One might also reasonably question the practicality of a punitive damages calculation that depends on an assessment of the magnitude of the unrealized catastrophic risks posed by a particular AI system. This does represent a formidable epistemic challenge, and I have encouraged technical AI safety researchers to work on developing methods for improving our ability to estimate these risks. But it is worth emphasizing that this challenge is not unique to ex post liability-based AI governance approaches. Any AI risk reduction program, whether regulatory, subsidy-based, philanthropic, or otherwise, must be premised, at least implicitly, on an estimate of the magnitude of the risks involved. This is necessary for determining whether the costs of mitigating the risks are worth bearing. The main difference in the context of punitive damages calculations is that the jury will be in an improved epistemic position by the time it is called upon to determine the damages award. A specific AI system will have failed, and the manner of that failure will give us new information about the riskiness of deploying the system. Juries may not be technical experts, but they will be able to rely on the expertise brought in by the parties, as fact finders routinely do in a wide range of cases that implicate domain-specific expertise.

My second major disagreement with VRA relates to the desirability of strict liability, which they worry might hinder innovation. Two points are relevant here. First, by compelling AI developers to internalize the risks generated by their systems, strict liability, properly applied, would discourage innovation only when the risks exceed the expected benefits. VRA rightly note that AI development may also produce positive externalities. However, the fact that an activity, broadly construed, produces positive externalities does not necessarily justify allowing participants in that activity to externalize the risks and harms that they generate. One might also worry about the courts’ capacity to implement strict liability with tolerable accuracy. But strict liability is actually easier to implement than negligence, since it doesn’t require courts to adjudicate difficult questions about what conduct is required to satisfy the duty of reasonable care in a highly technical context. Assessing punitive damages presents the more formidable challenge of estimating the magnitude of uninsurable risks generated by deploying particular AI systems. But regulators engaging in the sort of prescriptive or approval regulation that VRA favor would face a similar information problem, at least implicitly, in deciding how stringent to make their regulatory policies, and they would have to do so earlier in the process, from a poorer epistemic position.

In the AI context, allowing developers to externalize risk as an implicit subsidy for innovation makes sense only if we believe that forcing AI developers to bear the risks they generate would reduce the positive externalities by more than it mitigates the external risk, and only if we lack more precise tools for rewarding the generation of positive externalities. If, instead, there are opportunities for AI developers to substantially reduce the risks posed by their systems without greatly curtailing their benefits, then the threat of liability mostly pushes them to adopt those measures. Sometimes that might mean proceeding more slowly to take the time to ensure their systems are safe, but that is often a desirable outcome.

To the extent that AI development is expected to produce positive externalities, the better policy response is to direct subsidies (grants, tax credits, prizes, etc.) to the forms of AI development associated with those externalities. Congress has allocated billions of dollars to such subsidies. One can reasonably dispute whether these subsidies are well targeted or whether they should be bigger or smaller, but their existence suggests that it is probably unnecessary to allow AI developers to externalize some of the risks of their systems as a poorly targeted implicit subsidy for AI innovation. 

Second, the regulations that VRA favor would inhibit innovation at least as much as liability for any given level of risk reduction, for a few reasons. First, liability is naturally calibrated to the scale and likelihood of the risks associated with particular AI development paths. By holding AI developers responsible for the harms they cause and the uninsurable risks that they generate, tort liability provides well-targeted incentives for cost-effective risk mitigation measures.

Prescriptive regulations, by contrast, rely on government agencies to set and continually update a set of rules designed to promote safety, even though these agencies are poorly positioned to understand frontier AI technology compared to the private labs that are training frontier models. Some of these rules may clearly pass a cost-benefit test and be worth implementing, but these are also practices that AI labs are very likely to follow if they expect to be held liable for the harms they generate. On the margin, regulators will need to estimate both the magnitude of the risks from frontier AI models and the compliance costs, innovation costs, and risk-reduction benefits of particular regulatory interventions. Even with highly capable regulators, it is likely that some rules will end up being too stringent and others too lax. In more pessimistic scenarios, some rules may produce negligible safety benefits or even be counterproductive. This means excess innovation inhibition for any given amount of risk reduction.

Regulators also lack effective tools to compel leading AI companies to continually strive to push out the frontier of AI safety research. Currently, no one knows how to build highly reliable AI systems that can be counted on to be safe at arbitrarily high levels of capability. Prescriptive regulation is good at compelling companies to implement well-established safety practices but poorly suited to encouraging private actors to proactively seek out new risk mitigation opportunities. But the expectation that they will have to pay for the risks they generate would create just such an incentive. Moreover, as VRA acknowledge, only strict liability produces this benefit, because the reasonable care standard imposed by negligence liability would only require AI developers to implement well-established safety practices. By leveraging this otherwise-neglected margin for incremental risk reduction, strict liability can actually produce less innovation inhibition for a given level of risk reduction, in addition to achieving better calibration on the trade-off between risk and innovation.

VRA also raise concerns about tort doctrine evolving too slowly to keep up with frontier AI technology development and about irrational AI developers blundering ahead despite the liability risk. These are important concerns, which underscore the importance of early legislative action to clarify liability rules. Establishing strict liability and punitive damages via litigation may indeed move too slowly to coordinate expectations effectively, but legislation could make the liability rules clear before the relevant cases are litigated. It is the expectation of liability, not the actual damage awards, that is needed to shape the decision-making of the AI companies. My case for ex post liability as the primary AI governance tool should not be mistaken for an argument for judge-made law as an implementation mechanism.

Similarly, VRA’s concern about risk perception can also be addressed by legislatively imposing liability insurance requirements that scale with potentially dangerous model capabilities. This would introduce into the loop a more cautious decision-maker (the insurance company underwriting the policy) and force AI companies to face costs for the risks they impose as soon as they deploy potentially dangerous systems. In this framework, if AI developers cannot convince an insurance company to write them a policy they can afford (presumably, at least in part by demonstrating the safety of their system), or convince their investors to provide them with enough excess capital to self-insure, then they can’t deploy the system. Admittedly, this is a bit of a hybrid between ex post liability and ex ante regulation, but it rests on the same basic principle of internalizing the risk of harm and still centers liability risk as the primary governance mechanism.

My last key disagreement with VRA is their suggestion that the 1957 Price-Anderson Act might be a good model for AI governance. The basic bargain of Price-Anderson was to limit the liability of nuclear power producers while requiring them to buy insurance to cover their liabilities within those limits. The risk of harms exceeding those limits is effectively socialized. This liability approach was paired with a set of prescriptive regulations, which, since they were strengthened by the Energy Reorganization Act of 1974, have almost entirely stifled nuclear energy development.

Whatever the merits of this approach in the nuclear energy context, it does not make sense for frontier AI. At least with nuclear energy, we know how to regulate power plants stringently enough to ensure their safety. In fact, nuclear power has proved much safer than coal- or natural-gas-fired power plants and comparably safe to wind and solar. This is not the situation with frontier AI systems. We simply do not know how to make arbitrarily capable AI systems safe. There is no set of rules a regulator can write down that would ensure safety, short of enforcing a global ban on the development of models beyond a certain capability threshold. Under these circumstances, we absolutely want AI developers to bear, either directly or via private insurance, the tail risk that their systems cause catastrophic harm. The Price-Anderson model makes sense only if you think about liability primarily as a means of ensuring compensation for harmed parties, rather than as a mechanism for mitigating catastrophic risk.

All that said, I do think there are some important limits to ex post tort liability as a means of AI risk governance. First, it doesn’t work well if warning shots—cases of practically compensable harm associated with uninsurable risk—are unlikely. If AI developers and their insurers can be confident that the cases that would generate punitive damages are very unlikely to materialize, they may rationally undertake activities that generate risks of uninsurable catastrophe that are suboptimally high from a social welfare perspective. Similarly, it is possible that the most cost-effective measures to reduce the likelihood and severity of practically compensable harms do not effect a commensurate mitigation of uninsurable risk. I don’t think we have strong reasons to think that either of these two scenarios is likely, but they do represent potential failure modes that might warrant alternative policy interventions.

Tort liability, on its own, also doesn’t solve the problems of regulatory arbitrage and international coordination more broadly. A treaty providing for countries, particularly the U.S. and China, to enforce tort judgments from foreign courts would go a long way to addressing these concerns but may not be practically feasible. Similarly, private law solutions like tort liability start to break down if AI is nationalized as it gets more powerful (though it’s worth noting that this critique also applies to most forms of administrative regulation). Also, relatedly, even if AI labs are liable for harms downstream of cybersecurity breaches, they may simply lack the capacity to protect against nation-state-level hacking efforts, so some prescriptive regulation paired with direct government cybersecurity support may be warranted for frontier models, even if they’re not nationalized.

Finally, tort law can’t handle what I call legally noncompensable harms. This includes the political disinformation and election interference worries that VRA mention, and also cases where there is clearly no proximate cause. The most obvious example of the latter is downstream harms from open sourcing. If Meta open sources Llama 4 and a terrorist group uses a modified version to build a bioweapon, Meta could plausibly be held liable. But suppose the mechanism of harm is instead that Chinese AI labs gain insights from Llama 4 that move them closer to the frontier, which puts pressure on OpenAI to move faster and exercise less caution, and OpenAI then releases a system that causes harm. It’s just not plausible to hold Meta liable for that. This suggests some form of direct regulation of open source may be warranted.

In sum, I view ex post liability as the ideal centerpiece of AI regulation, with other instruments on hand to plug the gaps. VRA characterize tort law as one tool among many, ill suited to carry the bulk of the load for AI risk mitigation. But liability directly addresses the core market failure associated with AI: that training and deploying advanced AI systems risks harm, including catastrophic harm, that AI developers have inadequate incentives to account for. To be sure, there are other potential market failures, including a public goods problem related to AI alignment and safety research, and plausibly arms race dynamics that are not fully captured by either of these market failure models. Prizes, other forms of subsidies, and direct government investment in AI safety research can help address the public goods problem. Antitrust exemptions and international coordination of various forms may also be warranted to tamp down race dynamics. But these adjunct policies should be implemented within a framework that centers the core policy problem: AI labs currently lack sufficiently strong incentives to mitigate the catastrophic risks posed by their systems.

– Gabriel Weil is a professor at Touro Law. He teaches torts, law and artificial intelligence, and various courses relating to environmental law and climate change. Published courtesy of Lawfare
