AI is the New Plastics. Can We Govern it Better?

 

A technician works at an Amazon Web Services AI data center in New Carlisle, Indiana on October 2, 2025. (Photo by Noah Berger/Getty Images via Amazon Web Services)

The paradigm-shifting nature of AI has been compared to the agricultural revolution, the industrial revolution, electricity, and computers. But a better analogy may be synthetic plastics: the manmade material that, for better and for worse, forms the backbone of modern life. Today, it is impossible to maintain an entirely plastic-free life. Although regulation attempts to mitigate the damage that plastics cause, the material’s production is increasing, as is its presence in our soil and tap water.

AI, dubbed by some the “most transformative” technology of the century, is on a similar path to ubiquity, spreading from children’s classrooms and workplaces to military arsenals. And without regulation demarcating the line between valuable and frivolous uses of AI, or a populace able to make informed, active decisions about which side of that line they want to be on, AI will likely continue to diffuse rapidly. Before that happens, however, policymakers and legislators have a chance to manage it differently than we did plastics—and avoid creating something else that harms us as much as it helps.

Plastic’s Origin Story

The original problem that synthetic plastic solved was the dwindling supply of ivory that could not keep pace with the popularity of billiards. Enter celluloid billiard balls in the late 1860s. Bakelite—the first fully synthetic plastic—was introduced in 1907. This development was swiftly followed by the rise of cellophane, vinyl, and polyethylene. The March 1936 edition of Fortune Magazine ran an article devoted to this new manmade material, which was at once sanguine about its prospects (“the layman has been taught to believe that an age of plastics is at hand”) and impressed that this “child of the Depression” had increased its output so rapidly.

Plastic production quadrupled during World War II as the material found its way into everything from mortar fuses and gun turrets to the Teflon coating in the gas containers used in the Manhattan Project. Industries adapted to accommodate this demand, creating whole divisions devoted to plastic manufacturing, and the material even entered mainstream culture: Plastics formed a major plot point in the 1946 movie “It’s a Wonderful Life.” The boom continued in the post-war years as plastic cutlery and packaging entered the food industry.

By the time plastic bags carried everyone’s groceries in the 1970s, the material’s cachet had begun to dim (compare “It’s a Wonderful Life” to “The Graduate”). Scientists started noticing plastic debris in far-flung corners of the world—first in the Sargasso Sea in 1972, then in the North Atlantic in 1986, and then in the “Great Pacific Garbage Patch,” discovered in 1996. Within a century of its invention, the material responsible for revolutionizing the packaging, healthcare, transportation, and toy industries had started to alter the environment, and we have not made a dent in reversing the impacts. Plastic particles can now be found in the depths of the Mariana Trench, at the top of Mount Everest, and deep in the human bloodstream.

Implications for AI

British entrepreneur and CEO of Microsoft AI Mustafa Suleyman observes in his book, The Coming Wave, that humans and their tools have a symbiotic relationship. According to Suleyman, we are the creators of our tools just as much as we are products of them. In the case of plastics, this has become quite literally true: microplastics have been found everywhere in our bodies, from the placenta to the brain. We cannot get away from plastics even if we wanted to.

AI is also becoming increasingly difficult to escape. Search engines like Google are implementing AI features that users cannot remove, and companies like Apple are developing phones with AI assistants that are on by default. But more insidiously, the seemingly relentless march of AI throughout consumer products and economic sectors has generated a kind of pragmatic defeatism across business, education, medicine, and agriculture. For education in particular, the generational consequences could be vast. College graduates may soon lack the skills to write an essay without AI assistance, for example. The time to decide whether that matters—i.e., to craft and apply regulation that buys time—is now, before we become a “product” of AI.

Bounding AI’s myriad potential uses will require significant foresight into the costs and benefits of the technology. What sets AI apart from other inventions is its seemingly limitless applicability across society. The use cases for the combustion engine were narrow—you would not use a four-stroke engine to power a toothbrush. AI, by contrast, is now being implemented in everything from toothbrushes and children’s toys to deepfake generation and facial recognition technology. It is so pervasive that almost 65 percent of Americans do not even recognize the moments when they use it, despite avowing their fear of it. This makes it exceptionally challenging to use market forces to shape AI’s application appropriately.

Recommendations

The most direct negative output of plastic’s comprehensive diffusion throughout the economy has been pollution. Since the discovery of plastics in the Sargasso Sea, the industry has known there is no easy way to reduce plastic pollution; only a small portion gets recycled, while the vast majority ends up in landfills. To address these harms, Congress passed the Save Our Seas 2.0 Act in 2020 and directed the Environmental Protection Agency to develop a national strategy for mitigating plastic pollution, released in 2024. While comprehensive, the strategy is non-binding and simply offers “opportunities for action.” Meanwhile, there is no unifying federal-level legislation on recycling, and legislation on plastic production is a hodgepodge of targeted solutions, like the Microbead-Free Waters Act of 2015. Most actions—such as reducing or outright banning the use of plastic bags—exist at the state and local level and have been supplemented by successful public awareness campaigns. When a product like plastic is this interwoven into the fabric of life, it is extraordinarily complicated to remove a thread without rending the integrity of the whole.

The negative consequences of AI are both broader and more localized than enlarging landfills or harming wildlife. They include personal risks, from AI-induced psychosis to suicide, as well as community and climatological risks, like the amount of power and water data centers need to operate. And there is evidence to suggest that, like the makers of plastics, AI companies are also acutely aware of the potential consequences of their products.

Before these products permanently weave their way into modern life, there remains a window of opportunity to author effective regulation. The European Union provides a guide. In 2024, the EU adopted its AI Act, the world’s first comprehensive set of rules governing the development and application of AI. To protect EU citizens, the Act sets different rules based on AI systems’ risk levels and makes entry into the EU market contingent on minimizing those risks. This framework provides much-needed flexibility, allowing policymakers to tailor protections for citizens while fostering innovation as much as possible—the big fear of the U.S. tech industry. This approach could, for example, develop AI’s promising potential as a classroom tool while minimizing the technology’s risks to human cognition.

A risk-based regulatory framework like this does require rigorous enforcement, which is a weakness. But because it can adapt, it has the potential to manage the shifting, complex AI landscape over the long term. It can help drive the work that must be done to categorize risk types and the gradients within each—work of particularly urgent use in the military domain, especially since Anthropic and the Pentagon squared off over appropriate use cases for Claude. This could serve as a valuable guide for federal-level legislation in the United States.

To boost support for such legislation, there must also be a concomitant push to increase the general population’s AI literacy, something that has bipartisan support. If two-thirds of Americans are so unfamiliar with AI that they do not recognize it when it is in their hands, then it is not reasonable for policymakers to rely on consumer-driven decisionmaking in the marketplace to mitigate harm. This will not be easy. One major issue is the intangibility of AI compared to the internet, the last major technological leap forward in our lifetimes. The internet was confusing in how it worked (“a series of tubes”), but it was very clear what it could do, and taking advantage of it simply required an active choice and some software, often provided for free. Most critically, you knew when you were and were not using it—if not by the screeching modem sound, then by your need to find an internet cafe. By contrast, if you have access to the internet and a smartphone today, AI has already Trojan-horsed itself into your daily life. So, the question becomes: what does the average person need to understand about AI in order to consume it with intent—and whose job is it to provide that knowledge?

Many schools, especially high schools and colleges, are trying to increase AI literacy and proficiency, which may account for the greater proportion of people under 30 who recognize AI when they see it. This, however, leaves much of the population behind; there must therefore be a general push to educate the broader public. Public libraries are a cost-effective way to disseminate foundational information and training on how to spot and use AI tools safely and with intent. Libraries across the country, from Chicago to Frisco to Boston, are doing this now, using AI to reinforce the community-building role libraries play. In fact, the American Library Association wrote a strategy last year for exactly this idea. Ironic as it is, a paradigm-shifting emerging technology such as AI may re-center libraries in modern life, grounding them once again in their original purpose.

Conclusion

A century and a half after the creation of the first celluloid, something plastic is very likely near you: the disposable pen on your desk, the bottle of pills on your nightstand, the subway seat you are sitting on, or the AirPods in your ears. Plastics support modern life; it would take a global, whole-of-government effort to find materials to replace them, an effort many times greater than the original hunt for an ivory alternative that got us here. We are on the cusp of an equally comprehensive evolution as AI integrates into the life that plastic enabled. When we look around our offices, bedrooms, and public transportation in the next hundred years, where will AI technology be? And will we even know it when we see it?

During plastic’s heyday, it was reasonable to assume that one day, humanity would have to find a solution to “throwaway living,” given that garbage dumps are visible to the naked eye. Finding small plastic particles in the stomachs of seabirds as early as the 1960s, however, did not make it obvious that microplastics would end up in the human brain six decades later. As AI alters our relationship with modern life and society, what problematic consequences are easy for us to foresee today? Which of them could leave future Americans with another thread woven too tightly to remove?

While time is on our side, it behooves us to identify our risk tolerance for potential consequences in advance and to systematically determine—and then regulate and govern—how we want AI to be part of our lives. We can use strategic foresight to break down systems-level challenges into composite parts, scan the horizon for drivers, signals, and trends, construct the kind of world we hope to have, and work backwards. We can proactively bound and game out our risk tolerance sector by sector, choosing where the secondary and tertiary harms of AI outweigh the benefits—and vice versa. And we can actively choose how best to take advantage of AI rather than submit to its inexorable diffusion, building a society enriched by, not dependent on, this technology. What’s more, it is our responsibility to do so. Because this time, we can’t say we didn’t see it coming.

Published courtesy of Just Security.


©2026. Homeland Security Review. Use Our Intel. All Rights Reserved. Washington, D.C.