Shields Up For Software

As the Biden administration seeks to develop software liability legislation, consider a regime that incorporates one safe harbor and one “inverse safe harbor.”

As part of the National Cybersecurity Strategy, the Biden administration seeks to “develop legislation establishing liability for software products and services,” which would include “an adaptable safe harbor framework to shield from liability companies that securely develop and maintain their software products and services.” We propose that this software liability regime incorporate one safe harbor and one “inverse safe harbor.” The first would shield software creators and vendors from liability if they follow enumerated best practices in design, development, and implementation. The second—the inverse safe harbor, or sword—would automatically impose liability on developers who engage in defined worst practices. The safe and inverse safe harbors will provide certainty to regulated entities, reduce administrative costs, and create incentives for improving security. This article describes the twin safe harbors, their policy goals, and the key design criteria for their success.

Policy Rationales

At present, software is almost entirely exempt from the usual liability regimes that apply to products and services. Tort law excludes software for fear that imposing liability would cripple innovation. In addition, tort’s economic loss doctrine prevents recovery for pecuniary harm not linked to physical damage or injury, creating a major obstacle to recovery for harms caused by flawed software. Contract law enables software vendors to disclaim warranties and bar users from litigation even if they suffer harm caused by code. 

The administration’s cybersecurity strategy—as applied to software liability—would alter this status quo. A well-constructed liability regime would improve cybersecurity and penalize shoddy code while protecting software producers who follow reasonable practices. However, the new system will create risk, especially from uncertainty, for those who create and distribute code. 

The proposed software liability regime rests on a standard of reasonable care for design, development, and maintenance of code. Failure to meet the standard of care is negligence. This standard-of-care approach is widely used and often framed as a reasonableness inquiry. For example, in medical malpractice cases, the criterion is typically how a reasonable physician would have treated the patient. Standards of care are helpfully flexible and usually consider each case’s individual circumstances. However, this fact-specific analysis requires gathering considerable information and is thus costly. Decision-makers also inevitably make mistakes. Some actors who fail to meet the standard of care will escape liability, and some who conform to it will be punished. Safe harbors can improve this design. They provide certainty to regulated entities: immunity for following defined best practices, and clear liability for engaging in terrible ones. And safe harbors shape incentives: Coders and vendors that cannot or will not improve their code and business practices will be pushed out of the market. 

Safe Harbors: Automatic Immunity, Automatic Liability

We propose augmenting the standard of care for secure software with two safe harbor provisions: one that automatically imposes liability for defined worst practices, and one that automatically confers immunity for following enumerated best practices.

The Sword: Automatic Liability

First, the liability regime should include minimum requirements for software development by defining a set of worst practices that automatically create liability (the “inverse safe harbor,” or sword). For example, including a hard-coded account with administrative privileges and a password that cannot be changed is clearly an unacceptable decision. When hard-coded credentials are used in operational technology products in critical industries such as electric power, an adversary who discovers the password could change how the installed products operate. Developers who are careless or indifferent to security should face significant penalties. Risks of overdeterrence are minimal: There is no reasonable basis for engaging in worst practices.
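
To make this worst practice concrete, the minimal sketch below (in C, with invented function names and an invented password, offered purely as an illustration) contrasts a hard-coded administrative credential, the pattern catalogued as CWE-798, with a credential supplied through deployment-time configuration. A production design would go further, for example by storing only salted hashes and using constant-time comparison; the sketch isolates only the hard-coding problem.

```c
/* Illustrative sketch only: names and values are invented for this example. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Worst practice (CWE-798): the same administrative password is baked into
 * every shipped device and cannot be changed once an adversary learns it. */
static int login_hardcoded(const char *user, const char *pass) {
    return strcmp(user, "admin") == 0 && strcmp(pass, "Sup3rSecret!") == 0;
}

/* Safer pattern: the credential is provisioned per deployment (here via an
 * environment variable) so operators can set and rotate it. */
static int login_provisioned(const char *user, const char *pass) {
    const char *exp_user = getenv("DEVICE_ADMIN_USER");
    const char *exp_pass = getenv("DEVICE_ADMIN_PASS");
    if (exp_user == NULL || exp_pass == NULL)
        return 0; /* refuse to authenticate if no credential was provisioned */
    return strcmp(user, exp_user) == 0 && strcmp(pass, exp_pass) == 0;
}

int main(void) {
    printf("hard-coded login:  %d\n", login_hardcoded("admin", "Sup3rSecret!"));
    printf("provisioned login: %d\n", login_provisioned("admin", "guess"));
    return 0;
}
```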

The inverse safe harbor should begin by including practices for which there is widespread consensus that they are dangerous. We suggest starting with the bad coding practices catalogued in the Common Weakness Enumeration (CWE) list, which is maintained by MITRE and sponsored by the Cybersecurity and Infrastructure Security Agency (CISA). These are programming or design choices that are indefensible, albeit at differing levels of severity. Over time, the list of worst practices should grow. Indeed, decisions about compliance with the standard of care will likely identify conduct sufficiently far below the standard that it merits inclusion as a worst practice. Security-related litigation and settlements by the Federal Trade Commission (FTC) pursuant to its Section 5 authority could also provide examples of sufficiently bad practices—since the FTC must often make determinations about the adequacy of organizations’ security practices.

The Shield: Automatic Immunity

Second, the affirmative safe harbor defines conduct that makes developers immune from liability for particular features and design choices. The immunity safe harbor includes best practices: coding and design choices that maximize security and minimize risk. Immunity should be issue specific; software might comply with some best practices but not others, and the liability shield would apply only to compliant aspects. Developers will seek to incorporate these best practices because doing so guarantees freedom from liability—if not for other, more laudable reasons. Safe harbor provisions are common in regulatory regimes. For example, the safe harbors for online service providers created by the Digital Millennium Copyright Act (DMCA) ensure these entities cannot be held liable for transporting or storing content that infringes copyright if they follow prescribed rules, such as removing access to infringing material on notification. Nearly all service providers comply with the DMCA safe harbor requirements because they value the legal certainty immunity provides. 

The immunity safe harbor should contain two types of requirements. The first are process-based mandates, which require organizations to follow defined steps or protocols as part of developing and maintaining software code. For example, organizations might be required to implement a vulnerability-reporting mechanism and to document how they respond to “bug reports” they receive. The second are substantive provisions, which are specific requirements for features, design decisions, and the like. For example, developers might have to create programs that automatically encrypt sensitive data using a cipher or system with defined minimum strength (for example, quantum-resistant encryption). American cybersecurity regulation tends to concentrate on process to the exclusion of substance, to the detriment of software security and cybersecurity more generally. Process-based approaches can help ensure consistency across an organization and help avoid inadvertent errors that compromise security. However, they are less effective in ensuring that the process’s outcome is sufficiently secure. Some security issues are amenable to substantive rules. Data should be encrypted. Memory-safe languages should be used. Data inputs should be checked for their type and length to prevent buffer overflow attacks. Other issues are highly contextual and better managed by standards or by process obligations.
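
Substantive rules of this kind can be stated concretely in code. The minimal sketch below (in C, with invented names, offered purely as an illustration and not as a proposed mandate) shows the input-checking rule in practice: the length of untrusted input is validated before it is copied into a fixed-size buffer, closing off the classic buffer overflow catalogued as CWE-120.

```c
/* Illustrative sketch only: names and sizes are invented for this example. */
#include <stdio.h>
#include <string.h>

#define NAME_MAX 32  /* fixed-size field an attacker should not be able to overrun */

/* Unsafe: copies attacker-controlled input with no length check,
 * overrunning dst whenever the input exceeds NAME_MAX - 1 bytes. */
void set_name_unchecked(char dst[NAME_MAX], const char *input) {
    strcpy(dst, input);
}

/* Safe: validates length first, rejects over-long input, and always NUL-terminates. */
int set_name_checked(char dst[NAME_MAX], const char *input) {
    size_t len = strlen(input);
    if (len >= NAME_MAX)
        return -1;            /* reject rather than silently truncate */
    memcpy(dst, input, len + 1);
    return 0;
}

int main(void) {
    char name[NAME_MAX];
    if (set_name_checked(name, "routine input") == 0)
        printf("accepted: %s\n", name);
    if (set_name_checked(name,
            "an input far longer than the thirty-two bytes the field allows") != 0)
        printf("rejected over-long input\n");
    return 0;
}
```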

Enumerating best practices for the safe harbor will take time and may be context dependent. William McGeveran, professor of information law and data privacy at the University of Minnesota Law School, offers an analogous approach in the cybersecurity context, suggesting that an initial set of best practices could be drawn from the requirements included in most or all extant certification frameworks for cybersecurity. Software security best practices could be mapped in similar fashion. Guidance from industry leaders, such as Microsoft with its Security Development Lifecycle and Google with its Cloud security best practices, can offer candidates for inclusion. Finally, we believe that the immunity safe harbor should begin with requirements that apply to all software code, though, as the safe harbor develops, it may be appropriate for certain mandates to be sector specific.

Updates

The lists for both safe harbors will require frequent revision. We suggest that the CISA be tasked with ongoing, regular information-gathering and rulemaking to update the twin safe harbors. The agency should draw on industry-specific knowledge from expert sources, ranging from federal agencies to private trade organizations. The Business Software Alliance can offer input about general software development best practices, while the sector risk management agencies and information sharing and analysis centers will have industry-specific insights. With respect to agencies, the Department of Health and Human Services (in its role enforcing the Security Rule established by the Health Insurance Portability and Accountability Act of 1996) and the Federal Aviation Administration (in its role ensuring safe air travel) will have invaluable domain-level expertise and networks. 

Ideally, the CISA would be authorized to conduct both public and private consultations, followed by an abbreviated notice-and-comment period, to issue annual updates. This design employs a co-regulatory model, in which industry expertise undergirds and strengthens regulatory oversight. In addition, the CISA should have an emergency capacity to add to the inverse safe harbor when new information about highly risky practices emerges. Last, the CISA should be empowered with a confidential reporting function so that software developers, testers, and other experts could report risks and vulnerabilities without fear this material could be disclosed in litigation or used for regulatory enforcement action. The Cybersecurity Information Sharing Act of 2015 could serve as a useful model in this regard. Its provisions prohibit cybersecurity threat information shared with the government pursuant to the act from being used by the government as the basis for a regulatory or enforcement action against the entity that shared the information. Reporting could be made mandatory in some circumstances, as it already is for data breaches in many instances.

Risks

The safe harbors’ certainty could have undesirable consequences. For example, software developers may have little incentive to take security precautions greater than the immunity safe harbor’s requirements, which could effectively convert those mandates into a ceiling. The liability regime can mitigate this risk by establishing safe harbor requirements that are sufficiently rigorous to ensure strong security even if developers treat them as sufficient rather than merely necessary.

Key Design Choices

Here we highlight six key system design choices for the safe harbors.

Deterrence

Selecting requirements for the immunity safe harbor (the shield) is critical for at least two related reasons. First, if a developer follows a given requirement, the threat of legal liability for that aspect of the code disappears. Choosing the correct level of precautions is vital to setting incentives correctly: If the standard is too demanding, providers may overinvest in it to the detriment of other security measures; if it is too lax, providers can immunize themselves despite supplying code with subpar security. Second, for many software providers, the safe harbor will function as the de facto practical standard of care, because these entities will not want to expose themselves to legal risk through noncompliance. Practically speaking, the demands of the immunity safe harbor will be the rules of the road for major software developers. 

Legacy Code

Both safe harbors must address legacy code. Software standards and coding practices change over time, hopefully for the better. We suggest a bifurcated approach to legacy code. The immunity safe harbor should apply immediately on adoption to any code, of any age—compliance creates immunity. 

By contrast, the penalties of the inverse safe harbor (sword) should be implemented with a time lag. Developers did not have notice of these requirements and their legal effect when designing older code, and they will need time to bring it into compliance. Computer science studies suggest the average software life cycle is six to eight years. We suggest a time lag for implementation of the inverse safe harbor half as long: three years. Microsoft, for example, has historically supported a given Windows service pack (a bundle of major fixes) for 12 to 24 months, within a standard five-year mainstream support cycle for each Windows version. The CISA, or industry-specific security regulators, should be empowered to grant waivers for up to two additional years on a showing of significant need or hardship. For example, the software that runs most ATMs in the United States is written in COBOL because that language runs efficiently on mainframe computers and is optimized for business functions. However, there are very few remaining COBOL programmers, making replacing that code difficult; in fact, IBM has begun to use artificial intelligence capabilities to convert COBOL to Java.

Critical Infrastructure

Software in critical infrastructure might be held to a more demanding set of best practices to obtain immunity. Specialized treatment for critical infrastructure systems could drive progress toward greater security in a context where harm from insecurity would be particularly severe. One challenge is that the definition of “critical infrastructure” has expanded significantly over time: A 2003 presidential directive on the subject included national monuments and icons within its ambit, where they remain today. When critical infrastructure is everything, it is nothing. It is possible, if not likely, that these more rigorous standards will generate industry resistance. The more focused definition of a “covered entity” (that is, entities of higher criticality) under the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA), as currently contemplated by the CISA pending its final rulemaking, could be an appropriate starting point for specialized best practices if they are deemed necessary. Another possible starting point would be the “National Critical Functions” as defined by the CISA. These are the functions deemed so vital to the country that their disruption would have a “debilitating effect” on national security, national economic security, national public health or safety, or any combination thereof. However, regulators should evaluate the progress made under the new software liability regime generally before deciding whether a bespoke subsystem for critical infrastructure is necessary. If it is, the tailored immunity safe harbor regime should apply only to the most vital infrastructure contexts, such as those defined under the CIRCIA.

Testing

Determining compliance with the immunity safe harbor is challenging, but critical. There are essentially three options: evaluation after the fact (such as post-breach), self-certification, or external testing. 

Liability based on ex-post analysis is a core feature of the standard-of-care approach, so we do not discuss it further here. Self-certification risks inadequate investigation and concealment, although there are examples of mechanisms that could reduce these inherent problems (such as the required certification of a firm’s internal controls under Section 404 of the Sarbanes-Oxley Act, backed by potential civil and criminal penalties, or the Department of Justice’s Civil Cyber-Fraud Initiative). External testing is undoubtedly the most effective option, although also likely the most expensive. However, establishing effective external verification, especially regarding the independence of auditors, is a significant challenge. 

We suggest that developers initially be allowed to choose a mechanism for compliance determinations. This could generate a market: from less rigorous, less costly self-certification to more searching, more authoritative inspection by an expert private organization, such as a cybersecurity firm or the consulting arm of an accounting firm, or a government entity, such as the CISA or an industry-specific regulator. This choice itself could create incentives, since the selection of a compliance certification mechanism could provide an honest signal to software consumers and to regulators about the security of a particular piece of code. Furthermore, auditing of publicly traded firms, supervised by the Securities and Exchange Commission, provides a well-functioning model for this testing requirement.

If accumulated experience shows that this menu of certification mechanisms does not provide sufficiently reliable information, government auditors could step in as the permanent testing body. The CISA might establish an auditing function, much as the Federal Reserve does for the banking system. In any case, we believe that testing and auditing of code that seeks immunity is critical to the safe harbor and to the liability regime itself.

Transitional and Other Support

Compliance with the proposed software liability regime will impose costs on developers, who will not be able to fully capture the benefits of the resulting improved security. Software security is thus a classic example of a positive externality. This raises two concerns: transition costs, and effects on small and medium-sized businesses (SMBs), including many open-source software (OSS) organizations and academic institutions. We propose two complementary interventions: providing carrots and reducing the size of the stick.

Greater software security achieved through the proposed legal liability system will benefit society generally. The software entities that provide this benefit, potentially at significant cost, will not earn enough revenues to offset those expenditures. Government intervention is one classic answer to externality problems. If we want software providers to undertake costly action that provides broadly shared benefits, government should subsidize their efforts. We suggest either direct subsidies to software developers or tax credits for expenses linked to compliance. However, these costs will diminish with time and may well diminish rapidly as developers learn to implement best practices (and avoid worst ones) more efficiently. We propose that transitional financial support from government last for the same period of time as the exemption for legacy code: three years, with up to two additional years for cause on an individual basis. The federal government might also speed replacement of less secure legacy code by expanding financial support if a software entity moves more quickly to remediate its existing products. For example, a firm that replaces older, substandard code in two years rather than the allotted three might be granted funding equal to the time saved—here, an additional year of support (four years rather than three). 

Similarly, if concerns that the immunity safe harbor creates a ceiling for security materialize in practice, funding could create incentives for developers to follow preferred best practices that implement additional safeguards. For example, if the immunity safe harbor did not mandate that developers use memory-safe languages, additional government funding could be conditioned on utilizing them. Such support also creates incentives for software developers to implement precautions efficiently based on private information and expertise.

Reducing the size of the stick could also help SMBs, including OSS organizations that are essentially volunteer collectives, by ensuring that they do not cease development due to liability risks. One mechanism for limiting disincentives would be to set a liability cap: a maximum amount of damages for which an SMB could be held responsible, either as a flat ceiling (based on the size or revenues of the entity) or as a multiple of profits, revenues, or the like. This creates a risk of under-deterrence, since organizations may not face liability equal to the potential or expected harm they generate. Thus, we believe if a liability cap is implemented, it should not apply to software used exclusively or primarily in critical infrastructure. (We include the limiting language “exclusively or primarily” because otherwise most significant software programs, including staples such as operating systems and web browsers, would be drawn within this exception, thereby vitiating it.) However, developers need not face the full social cost of insecurity to have adequate incentives—a level of potential damages that forces market exit or bankruptcy is likely to suffice. Liability caps may also be necessary to ensure the survival of many software SMBs, including startups. Small businesses, open-source organizations, and academic institutions tend to be wellsprings of innovation and receive support based on that generativity under other government programs, such as patent policy.

Mechanism

The optimal mechanism to implement the safe harbors, and the software liability regime more generally, is comprehensive federal legislation. However, aspects of our proposal could be partially implemented through alternative means. For example, compliance with the safe harbor requirements could be a condition of eligibility for certain software and information technology purchases by the federal government, such as those imposed via the Federal Acquisition Regulation (FAR) and Defense Federal Acquisition Regulation Supplement (DFARS). Or compliance could be encouraged, yet formally made optional, by use of federal subsidies or tax credits without implementation of the liability regime. State and local governments could undertake similar actions. While piecemeal measures are second best, they would nonetheless improve on the current situation.

Conclusion

Including both an immunity safe harbor (the shield) and an inverse safe harbor (the sword) in the proposed software liability regime will make the system more predictable, effective, and efficient. These zones of immunity and absolute liability complement the standard of care and draw on distributed information about software design and implementation. Since under our proposal software entities that comply with the safe harbors would obtain significant reductions in costs, including expected liability risk, the requirements imposed would likely shape conduct in a direction that advances the goals of the National Cybersecurity Strategy.

– Derek E. Bambauer and Melanie J. Teplinsky. Published courtesy of Lawfare.
