It’s Morning Again in Pennsylvania: Rebooting Computer Security Through a Bureau of Technology Safety

To escape the computer security bootloop, Congress can create a new technology safety regulator of last resort—the Bureau of Technology Safety (BoTS).


In the 1993 film “Groundhog Day,” a weatherman played by Bill Murray is trapped in the human version of a bootloop. No matter what he does, he wakes up on Groundhog Day in a bed and breakfast in Punxsutawney, Pennsylvania. When his efforts to escape the loop fail, he descends into self-destruction. He steals a pickup, kidnaps a groundhog, and launches himself off a cliff, all to no avail. For the past two decades, the United States has been living in the computer security version of “Groundhog Day.” But, in our case, we will not get endless loops in which to self-correct. It is time to interrupt the computer security bootloop.

We can start this interruption by (re)framing current computer security policy as the lynchpin of broader technology safety issues—issues of predictable physical, economic, and public safety harms that are tragically escalating with each generation of technology. But, importantly, we should recognize that we are not working from a blank slate; computer security policy efforts to date include various forms of software liability, even though these efforts have fallen short of interrupting the computer security bootloop. So, what should we do next? Our policy interruption should amplify forms of computer security liability that already exist and fill in critical technical and legal technology safety gaps. Immediate efforts should focus on messaging the technical and legal importance of threat modeling for all organizations and launching a new Internet Epidemic Service of health care security rapid response teams. However, the core element of a successful long-term computer security response involves asking Congress to adopt a broader lens: Congress should create a new technology safety regulator of last resort—the Bureau of Technology Safety (BoTS).

The Computer Security Bootloop: The Public-Private “Big CyberShort”?

Today’s computer security threats—whether at a national or an individual level—are a key subset of technology safety challenges: The physical and economic safety of the public, including access to core government functions, is contingent on confidentiality, integrity, and availability of technology. This contingency matters even more today than it did five years ago. Yet, government functions have been disrupted by security compromises on federal, state, and local levels, and at least 870 critical infrastructure entities were targets of ransomware in 2022. As the Cybersecurity and Infrastructure Security Agency (CISA) explained recently, deaths tied to health care ransomware are mounting, and hospitals are now closing due to ransomware. Meanwhile, Nasdaq notes that 2023 set a record for corporate security compromises. According to the FBI’s 2022 Internet Crime Report, total losses to security-related fraud jumped from $6.9 billion in 2021 to $10.3 billion in 2022. Losses from botnets, phishing, malware, spoofing, data breaches, and tech support scams all increased. The FBI’s Recovery Asset Team saw a 64 percent increase in initiated actions against business email compromise complaints between 2021 and 2022. Social engineering attacks also rose over 130 percent, particularly against people over 60, who reported more than $3 billion in losses to various security-related frauds. The future of security and technology safety looks even darker, more “awful,” and more directly connected to loss of life from both external and insider threats, particularly as systems (overly) reliant on artificial intelligence (AI) and its potentially flawed hardware and software introduce or otherwise exacerbate public safety risk.

But, despite the growing technology safety threat, none of the current policy challenges in “cybersecurity” are new—and neither are issues of security liability. While precise threat vectors may not always be known, the general trends in attacks, incidents, and internal control failures in computer security have long been tragically predictable to experts. Indeed, today’s computer security bootloop has actually been with us for over four decades. The inadequately patched, recurring vulnerabilities of the 2000s morphed into attack vectors for state-sponsored persistent threat actors during the 2010s; today they linger as threats to infrastructure, supply chains, and economic and public safety at scale. The integrity and availability security failures that ravaged markets during the 1960s presaged the integrity and availability security issues in markets in the 2000s and 2010s. The excess deaths from technical flaws in medical devices of the 1980s were harbingers of the excess deaths tied to today’s ransomware epidemics. The viruses, defacements, and design flaws of the 1990s begat the phishing, botnet, and other attacks of the 2000s, which in turn foretold more recent disruptive compromises. The security-invasive DRM rootkits of the 2000s (and their malicious repurposing and failed fixes) that cost both the government and the private sector millions of dollars forewarned of purpose-built deception technologies, such as the auto defeat devices of the 2010s. But technology hype cycles often distract us from seeing the recurring, long-term nature of the problem: We are driving toward a technology safety cliff.

New vulnerable and flawed technologies are being built on top of old vulnerable ones, and technical debt is revolving. Escalating forms of technology fraud, unfairness, and abuse are harming economic stability, government functions, and the public’s physical safety; security harms amplify with each generation of technology. Meanwhile, repeat private-sector players sometimes seem impervious to enforcement. Sometimes they exploit coverage gaps across agencies’ enabling statutes, and sometimes they game suboptimal coordination when a single technology safety problem cuts across multiple agencies’ authority. Their “alignment” plans regularly fail to consider the economic and physical safety impact of their technologies on members of society who are not their investors, developers, or existing users. The bootloop repeats. Yet, the time value of effective policy response—much like the time value of money—matters.

Adopting a historical perspective, we have seen a version of this dynamic before: It is perhaps uncomfortably reminiscent of the way that inadequately vetted “innovative” but unsafe financial products and services were built on top of each other in the 1980s and 2000s, ending in financial crashes. We are at risk of trading in the harmful dynamics of the “Big Short” for a shiny technology version—a “Big CyberShort” environment that undercuts the trustworthiness and safety of both the private and the public sectors.

What Have We Already Tried?

The efforts to escape the computer security bootloop have been manifold: public-private partnerships through Information Sharing and Analysis Centers (ISACs) and other “loose coalitions”; efforts to rebrand computer security problems as “cybersecurity” to make them sound more exciting and less “unsparkly”; processes for coordinated disclosure of vulnerabilities; the formalization of some computer security reporting; and enhanced authorities in various statutes, most recently in the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA) and the Consolidated Appropriations Act of 2023.

Existing bodies of law have already resulted in successful findings of liability, settlements, and criminal consequences from internal control failures in security. Yet, even when notice of critical, remotely exploitable vulnerabilities comes directly from the government, companies sometimes have “decided not to resolve these vulnerabilities.” Significant technical debt in computer security frequently goes unresolved—despite computer security enforcement and guidance efforts by the Food and Drug Administration (FDA), the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), the Department of Justice, the Securities and Exchange Commission (SEC), the Environmental Protection Agency, the Financial Crimes Enforcement Network (FinCEN), CISA, and others; standards from the National Institute of Standards and Technology and the International Organization for Standardization; multiple executive orders from the White House; and the creation of an Office of the National Cyber Director. Instead, some companies choose “enforcement roulette,” playing the odds that historically cash-crunched agencies will pick a different target. But robust quarterly returns will become impossible if national technology safety is fatally compromised. At some point, investors will avoid organizations (and market economies) compromised by and reliant upon unsafe and untrustworthy technologies; at some point, organizations vulnerable by choice will no longer be able to privatize the profits and socialize the costs of their computer security shortcomings. Meanwhile, the public will be exposed to and even blamed for escalating technology safety harms.

Why Is This So Hard? What Are We Missing?

Computer security (also known as information security or cybersecurity) is hard—even for those of us who have worked in the field for over two decades. Security issues and vulnerability are always reciprocal in both form and impact: The technical nature of vulnerability is interwoven with human systems, and computer security issues cut across expertise and sectors. Lax internal control practices—both technical and human—in a physical space open the door to lateral movement by an attacker as part of a multistep compromise involving digital means, and vice versa. Because compromise can happen wherever the flawed or vulnerable software and hardware (and people) are deployed, private-sector vulnerability can lead to public-sector compromise. For example, an Internet of Things company’s failure to defend aggregated user location data may reveal a secret military base. A vulnerable medical device may be used in both private hospitals and Veterans Administration facilities, and an attack against device integrity could physically harm patients in both simultaneously. The virtual museum of a U.S. public university can experience an availability failure and “burn down” through a physical security failure in a remote EU private server facility. In other words, computer security and technology safety are not just “cyber” and not just domestic problems. They are also not just “tech company” problems; they are the problems of all organizations that rely on technology.

Policy and legal analysis often misses this bigger, repeating picture of reciprocal security vulnerability. Frequently, the instinct of policy and legal experts is to artificially compartmentalize along traditional boundaries between the public and private sectors and into (self-referential) subfields. For example, legal analysis often inadequately recognizes that security liability is not just a tort law issue. It is simultaneously an issue governed by generations of existing securities regulation, corporate law, procurement regulation, contractual defaults, consumer protection and market fairness, competition policy, medical device regulation, copyright, and other related areas, including the First Amendment. Also, a legal duty to warn about known security flaws and unsafe technology products and services arguably already exists in common law. Indeed, some companies are acting in line with the existence of this duty. In other words, security liability already exists under various different legal names, even if it is not adequately robust.

Miscalibrating any of these pieces in technology safety miscalibrates the whole; these are not discrete regulatory problems. Further, economic and technology history, particularly lessons from the 1960s- and 1970s-era “books and records” or “back office” crisis, offers stark warnings that escaping a national spiral downward requires robust oversight from a single coordinating agency. Yet, no single U.S. agency aggregates the depth of technology safety knowledge, enforcement expertise, and oversight authority required to escape the computer security bootloop. We should turbocharge what already exists, align efforts, and fill in the gaps.

What Should Come Next: Facing Technical (and Legal) Debt

On March 2, 2023, the White House released a National Cybersecurity Strategy, opening the door to interrupting the national computer security bootloop and signaling the launch of a national technology safety policy. The strategy highlights that computer security is linked inextricably to broader questions of technology safety and the challenges of reciprocal security vulnerability. Specifically, the strategy articulates that “national security, public safety, and economic prosperity” require that we consider how “to reduce risk and shift the consequences of poor cybersecurity away from the most vulnerable in order to make our digital ecosystem more trustworthy,” including through “[s]hifting liability for software products and services,” “coordinated, collaborative action … in the innovation of secure and resilient next-generation technologies and infrastructure[,]” and “[l]everaging international coalitions and partnerships … to counter threats to our digital ecosystem through joint preparedness, response, and cost imposition.” Five action items would begin to recalibrate national computer security policy toward the technology safety vision embodied in the National Cybersecurity Strategy:

  1. Reiterating and expanding upon statements from past Executive Orders, the White House can clearly message that: 

a. Computer security/cybersecurity risk is an economic and public safety issue for everyone, like nationwide E. coli outbreaks or collapsing bridges and interstates. It impacts all public and private organizations and all members of the public. 

b. Using formal threat modeling methodologies that are standard in the information security expert community, all public- and private-sector organizations should regularly conduct and document technical threat modeling for security risk, including mapping, testing, auditing, and tracking their internal operations; all externally facing operations/products and services; and their organization’s progress toward eliminating technical debt, including both internal control and governance shortfalls and external security/safety risk imposed on the public through their operations. (A minimal illustrative sketch of such documentation follows this lettered list.)

c. In addition to technical threat modeling, all public- and private-sector organizations should engage in a second layer of public safety threat modeling or meta-modeling focused on assessing both public safety risk and organizations’ own likelihood of future legal liability. This meta-modeling should assess three “security CHI” technology safety factors: context sensitivity, harm potential and nature, and intent/knowledge (in a legal sense) of builders, deployers, and maintainers of technologies. Security CHI offers a First Amendment-sensitive framework that will assist courts in evaluating future security cases. It also creates a springboard for sector-specific meta-modeling, building a shared language for harmonizing U.S. computer security/technology safety policy with that of international partners. Finally, public safety meta-modeling also acts as a defense against insider attacks on the organization itself by its officers. 

d. Failure to engage in accurately documented technical and public safety threat modeling in line with computer security industry best practices signals a presumptive lack of due care in internal controls. The burden of proving otherwise rests with the organization.

e. A duty to disclose known unsafe technology conditions to the public exists across industries. The existing agency enforcement and guidance activity referenced above is consistent with this assertion, as is the common law duty to warn and protect from nonobvious threats to safety by persons who choose to engage with the public and who possess superior knowledge of a safety issue. 
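
To make items (b) and (c) concrete, the following is a minimal sketch, in Python, of how a single entry of this documentation might look, pairing a standard technical threat taxonomy with the three “security CHI” meta-modeling factors described above. STRIDE is one widely used technical threat modeling taxonomy; every class name, field name, and the example entry here are illustrative assumptions, not a prescribed schema or regulatory format.

```python
# A minimal illustrative sketch (not a prescribed format) of one entry of the
# technical threat modeling and "security CHI" meta-modeling described in
# items (b) and (c). All names and the example data are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Stride(Enum):
    """STRIDE: a widely used technical threat modeling taxonomy."""
    SPOOFING = "spoofing"
    TAMPERING = "tampering"
    REPUDIATION = "repudiation"
    INFO_DISCLOSURE = "information_disclosure"
    DENIAL_OF_SERVICE = "denial_of_service"
    ELEVATION_OF_PRIVILEGE = "elevation_of_privilege"


@dataclass
class SecurityCHI:
    """The three meta-modeling factors from item (c)."""
    context_sensitivity: str   # deployment setting (e.g., hospital ICU vs. demo lab)
    harm_potential: str        # nature and scale of potential public harm
    intent_knowledge: str      # what builders/deployers/maintainers knew, and when


@dataclass
class ThreatModelEntry:
    asset: str                       # system, product, or internal operation mapped
    threats: list[Stride]            # technical threat categories identified
    mitigations: list[str]           # controls tested and audited
    open_technical_debt: list[str]   # known, unresolved shortfalls being tracked
    chi: SecurityCHI                 # public safety meta-modeling layer
    reviewed: date = field(default_factory=date.today)


# Hypothetical example: one documented entry for a network-connected device fleet.
entry = ThreatModelEntry(
    asset="network-connected infusion pump fleet",
    threats=[Stride.TAMPERING, Stride.DENIAL_OF_SERVICE],
    mitigations=["signed firmware updates", "network segmentation"],
    open_technical_debt=["unpatched legacy units in two rural facilities"],
    chi=SecurityCHI(
        context_sensitivity="deployed in ICUs; failure directly affects patients",
        harm_potential="physical injury or death from dosing integrity failure",
        intent_knowledge="vendor advisory received 2023-Q2; remediation in progress",
    ),
)
print(f"{entry.asset}: {len(entry.threats)} threats, "
      f"{len(entry.open_technical_debt)} open debt items")
```

The particular format is immaterial; what matters, as item (d) underscores, is that the modeling is actually performed, accurately documented, and kept current.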

  2. The White House can instruct all federal agencies to annually file with the Office of the National Cyber Director their threat modeling documentation for their past fiscal year and a self-assessment of improvements in internal and external security efforts in line with their missions since their last report. Each agency should also have a dedicated page on its website with information about computer security-related enforcement, guidance creation, rulemaking, hearings, and other public technology safety efforts. 

  3. The White House can instruct CISA to collaborate with ISACs, sector coordinating councils, and relevant agencies to draft sector-specific threat modeling and incident response playbooks, modeled on the FDA’s threat modeling and incident response playbooks, for sectors where no such playbook already exists.

  4. The White House can address the rural health care security crisis (and assist hospitals with avoiding potential security liability exposure) through the launch of an Internet Epidemic Service (IES) using the model of existing CDC programs, in collaboration with the FDA, the Administration for Strategic Preparedness and Response, the Centers for Medicare & Medicaid Services, and CISA. The IES should be composed of three sets of teams—stabilize, support, and sustain teams. Borrowing the highly successful model of FDIC emergency response teams, which prevent public panic through hands-on management of bank failure scenarios, the IES stabilize teams would perform emergency management of hospitals suffering ransomware and other technology safety incidents. Support teams would then remediate critical technical debt post-crisis, while sustain teams would transition the hospital to security self-maintenance going forward. As the FDIC’s deft handling of the Silicon Valley Bank collapse in 2023 showed, triage teams already play a crucial role in stabilizing systems beyond the hospital setting. Assuming the success of the IES, the various sector-specific agencies and regulators can collaborate with the CDC and CISA to stand up IES teams for each of the other critical infrastructure sectors.

  5. In collaboration with Congress, the White House can propose and launch a new technology safety regulator of last resort—an independent coordinating and gap-filling enforcement agency, whose mission is to longitudinally track and enhance the safety of technology products, services, and practices in a technology-neutral and sector-neutral manner. In other words, the goal is to supplement and not supplant existing agency and state authority and efforts in technology safety, including computer security, filling gaps as national technology safety requires. When socially destabilizing harms emerged from capital markets, the federal government created the SEC in the 1930s and the CFPB in the 2010s to rebuild trustworthiness of financial products. Today, a parallel new agency is required to rebuild trustworthiness of technology-reliant organizations, products, and services. 

The new technology safety regulator of last resort might be named the Bureau of Technology Safety (BoTS). Specifically, BoTS’s mission should focus on (a) protecting the public from unsafe and untrustworthy technology products, services, and practices through enforcement, advocacy, research, and education; (b) promoting and maintaining a fair and resilient technology marketplace through sound regulation; and (c) facilitating development of a trustworthy technology innovation ecosystem in the national and public interest. 

BoTS’s regulatory reach should cover three categories of technology safety risks, drawing in both for-profit and nonprofit organizations: (a) risks arising from internal control shortfalls of entities of sufficient size to meet the lowest “size of person” thresholds for Hart-Scott-Rodino reporting, regardless of whether the entity in question is for-profit or nonprofit; (b) risks arising from internal control shortfalls of government contractors; and (c) risks arising from products/services distributed or marketed to the public in more than one state or sold to the public in interstate commerce. 

BoTS should comprise three divisions: (a) an enforcement division with robust civil and criminal fining, referral, and injunctive authority where technology safety issues place the public at risk; (b) a policy coordination/technology futures tracking, modeling, research service, and rulemaking division (subject to the Administrative Procedure Act) that (i) engages across the government to monitor enforcement trends and emerging technology safety issues in broader national, international, and historical context, and (ii) functions as a computer security and technology safety whistleblower office of last resort; and (c) a research and “pilot projects” arm that both (i) acts as a hub of in-house technical expertise for BoTS and for other agencies needing investigatory assistance in connection with their own technology enforcement and (ii) launches such experimental initiatives as technology safety may require. Pilot project initiatives, if deemed successful, would then move to the policy coordination/technology futures group for launch through APA rulemaking.

In other words, BoTS would expedite and coordinate policy response to the most complex public safety harms caused by and through technologies. Particularly as escalating AI-facilitated attacks on the general public become more socially unsettling, existing agencies will be unable to successfully scale and coordinate their technology safety efforts. BoTS would also resolve legal terminology collisions, unifying efforts of agencies with different missions and authorities, varying levels of clearances/visibility into national security, and differing expertise. Indeed, some enforcers doing important security work do not view it in those terms or recognize it as such. A supportive, coordinating, and gap-filling agency would recognize these efforts, placing them in a broader technology context.

BoTS’s structure should be modeled on CFPB’s structure as it will exist after Supreme Court review in Community Financial Services Association of America Ltd. v. Consumer Financial Protection Bureau. Assuming the Supreme Court’s blessing of (some version of) CFPB’s director structure and congressional funding mechanism, these elements can serve as the model for a new, nimbly designed technology safety agency. Like the CFPB, BoTS would arise as a cross-cutting response to a destabilizing hybrid public-private dynamic involving technical and complex products and services. In other words, the agency created to address the conditions of the Big Short offers a good structural model for a policy interruption to the risks of the Big CyberShort. 

Specifically, BoTS’s proposed mission and authorities are modeled on those of the FTC, CFPB, SEC, CFTC, FDA, CPSC, and USDA, translated into technology safety terms. Its work should also be informed by lessons from FDIC and FinCEN history—successful models of cross-cutting coordinating agencies, where financial criminality was a key motivator for insider and external attacks. By contrast, CISA expansion would not accomplish the purposes of this proposal: many of the relevant technologies and entities are not necessarily part of critical infrastructure. Similarly, the Department of Homeland Security is not the correct home for BoTS: BoTS requires an independent team that includes veteran enforcers with financial fraud, First Amendment, international consumer protection, and other expertise that goes beyond infrastructure and national security concerns. BoTS’s authority also should not preempt state enforcement authorities, and its authority as scoped eliminates the need for any safe harbor carve-outs in its enabling statute. BoTS’s core role as an interagency and public coordinator, translator, and backstop would enhance market trustworthiness and public safety in harmonized ways—ways that can effectively scale to address evolving public technology safety challenges. 

So, it’s computer security Groundhog Day. Again. We have stolen the pickup, and a groundhog is riding shotgun. This may be the last chance to avoid driving off the computer security and technology safety cliff; it’s time to turn the pickup around.

– Andrea Matwyshyn is a professor of law and Associate Dean of Innovation at Penn State Law School. Published courtesy of Lawfare.

