The AI Presidency: What “America First” Means for Global AI Governance

President-elect Donald Trump’s imminent return to the White House is set to transform the global AI landscape, with Silicon Valley’s tech titans at the helm. In the past week alone, Trump has reportedly met with a procession of industry leaders and received $1 million donations to his inaugural fund from Meta, Amazon, and OpenAI CEO Sam Altman. “President Trump will lead our country into the age of AI,” said Altman, capturing the zeitgeist of the moment. “I am eager to support his efforts to ensure America stays ahead.”

Silicon Valley’s turn toward Trump coincides with the recent nomination of tech executives to key positions in the incoming administration. In early December, Trump announced he would appoint former PayPal COO David Sacks as “White House AI and Crypto Czar”—a new position aimed at “making America the global leader in both areas”—and Palantir Technologies’ Jacob Helberg as Under Secretary of State for Economic Growth, Energy, and the Environment. These picks signal a shift toward laissez-faire regulation and an increased focus on winning the AI race with China. With America leading the world in tech, talent, and computing power, Trump and his Silicon Valley backers have significant leverage to reshape—or outright reject—emerging rules of the road for global AI governance.

But while America is embracing AI innovation, the world is bracing for impact. Trump 2.0 has sparked fears of mercantilism, in which U.S. technological primacy and access to global markets take precedence over multilateral cooperation. His plan to “Make America First in AI,” including by expanding Biden-era export controls against China, is widely expected to accelerate an AI arms race that is already underway. Beyond competing with China, Trump may turn his gaze to Europe, where America’s tech giants have long chafed at strict regulations on AI, content moderation, and data privacy. The coming AI presidency will demand careful preparation—not only to adapt to potential changes in U.S. policy but also to safeguard international collaboration on AI governance.

The Illusion of a Hard Reset

To prepare for the coming AI presidency, it is crucial to look to the past. The Biden administration has made progress on domestic and international frameworks for AI governance. The October 2023 Executive Order on AI called for the United States to “lead the way” in promoting “responsible AI safety and security principles and actions with other nations, including our competitors.” Working with allies, the Biden administration engaged the European Union on AI through the Trade and Technology Council, played a leading role in the U.K.-sponsored Global AI Safety Summit, and convened the inaugural meeting of the International Network of AI Safety Institutes, among other initiatives. The Biden administration also held formal bilateral talks with China on AI governance for the first time, with President Joe Biden and Chinese President Xi Jinping agreeing to maintain human control of AI in the nuclear domain. Biden and Xi reiterated this agreement at a meeting in November, while stressing “the need to consider carefully the potential risks and develop AI technology in the military field in a prudent and responsible manner.”

Even in highly contested spaces such as defense and security, the Biden administration has led the way in developing multilateral frameworks for the governance of AI. The United States drafted the 2023 Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, a non-legally binding agreement endorsed by over 50 countries that establishes norms for the development and deployment of AI systems in the military domain. Those countries held working group meetings on military AI and urged international partners to implement the Political Declaration at the second global summit on Responsible AI in the Military Domain in September. In October, the Biden administration also publicly released its landmark National Security Memorandum on AI (NSM), the result of a year-long process directed by the 2023 executive order to review the integration of AI into national security systems. The NSM directs U.S. national security agencies to “collaborate with allies and partners to establish a stable, responsible, and rights-respecting governance framework.”

Under Trump, these nascent efforts face an uncertain future. During the campaign, Trump promised to repeal Biden’s executive order on AI and loosen perceived restrictions on free speech. Still, change may be more evolutionary than revolutionary. When Trump takes office, his administration will inherit a complex web of AI governance frameworks, policies, and regulations spanning hundreds of U.S. government departments and agencies—from the Office of Management and Budget to the Department of Defense and the Intelligence Community. Over time, federal agencies have developed institutional processes, norms, and policy guidance that, while not enshrined in law, will not be easily dismantled. Many existing rules also reflect more than a decade of bipartisan support for government spending in AI research and development, workforce initiatives, and efforts to leverage technology to protect U.S. national security.

It should also be recalled that Trump’s first administration laid the groundwork for current AI governance frameworks, issuing executive orders in 2019 on “Maintaining American Leadership in Artificial Intelligence” and in 2020 on “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.” Both executive orders emphasized the need for safe, responsible, and transparent AI development and deployment, in accordance with U.S. laws and values. Key policy frameworks concerning the use of AI in the defense and intelligence communities also emerged during the first Trump presidency, including the Pentagon’s “Ethical Principles for Artificial Intelligence” and the Office of the Director of National Intelligence’s “Principles of Artificial Intelligence Ethics for the Intelligence Community.” The Biden administration built on these Trump-era frameworks, and Trump is unlikely to abandon them altogether.

Far from loosening all Biden-era restrictions, the Trump administration is likely to adopt a sector-specific approach to regulation focused narrowly on AI applications, rather than top-down regulation aimed at mitigating potential risks. For example, Trump is expected to give tech companies greater latitude to self-regulate and experiment with AI capabilities. This latitude could, in turn, translate into greater emphasis on regulating the deployment (rather than the development) of AI tools in certain sectors. As venture capitalist Marc Andreessen put it, Trump has signaled to Silicon Valley that “it’s time to build.” If history is any guide, the Trump administration is also likely to prioritize streamlining AI frameworks, eliminating burdensome reporting requirements, and relying more heavily on the application of existing laws to AI rather than creating new laws or norms.

From Domestic to Global Governance

How U.S. domestic policy changes will play out on the global stage remains to be seen. During its first term, the Trump administration overcame its initial reluctance to join the Global Partnership on AI—launched in June 2020 to promote intergovernmental cooperation on AI—only after deciding the initiative would help reduce China’s influence. This time around, Trump is expected to take an even harder line on China when it comes to AI. In addition to expanding Biden-era export controls on AI-related technologies, the new administration is expected to close the gaps in earlier rules that allowed China to circumvent them. The upcoming AI Action Summit in Paris on Feb. 10, which both the United States and China are expected to attend, will serve as a litmus test for whether the new administration will pursue limited cooperation with China on AI governance alongside these measures.

On military AI, reports of China using open-source AI models for military and intelligence purposes are likely to further fuel an AI arms race. The Trump administration is poised to expand the use of AI in national security, with more funding for tech companies to develop classified versions of commercially available tools. Trump confidant and X owner Elon Musk, who will lead the newly created Department of Government Efficiency, has set his sights on disrupting traditional tech procurement processes in the Pentagon, potentially favoring smaller companies such as Anduril and Palantir. Even OpenAI, which until recently declined to work with the U.S. military, announced on Dec. 4 that it would partner with defense contractor Anduril to integrate AI into the Pentagon’s counterdrone systems.

Trump’s recently announced nominations for his Cabinet and other top posts reinforce this trend, as both Sacks and Helberg have long advocated for and helped bring about the greater integration of AI tools into military and national security systems. Helberg, in particular, has described AI and other technologies as putting the United States in a “Grey War” with China, Russia, Iran, and other states—a war that only Silicon Valley can help win. But this view, too, is not novel. For decades, it has been clear that Silicon Valley, not Washington, would dictate how the AI race is run. Now, U.S. allies will have to contend with both.

America First in AI or America Alone?

The United States is already on a collision course with its transatlantic partners when it comes to tech policy. Trump 2.0 has been a wake-up call for Europe, as the European Union seeks to promote its own standards for AI governance through the AI Act, the Digital Services Act (DSA), and beyond. In the coming year, EU regulators are on track to confront key Trump allies including Elon Musk, whose X platform has been found in non-compliance with the DSA and faces steep fines of up to 6 percent of the company’s global revenue. Vice President-elect JD Vance has previously threatened to halt U.S. funding to NATO if the European Union imposes these fines.

Meanwhile, the United Kingdom—a major AI player with privileged access to U.S. technology and intelligence—is also concerned about overreliance on American tech companies. Even under the Biden administration, U.K. government advisors were aware that the United States had pledged “equitable,” but not equal, access to the benefits of AI. Trump’s victory has compounded these concerns. On Dec. 13, U.K. Technology Secretary Peter Kyle said the Trump administration would pursue a “starkly” different approach to regulation, one that may jeopardize international collaboration via the AI safety institutes as the Trump team decides whether to scrap the U.S. AI Safety Institute established under Biden’s executive order.

If the United States retreats further into protectionism or goes its own way, Europe may be forced to pick up the mantle of global AI governance. But with less capital, talent, and technological capacity, Europe wields far less influence than the United States. Both the European Union and the United Kingdom lag significantly behind the United States in AI development, attracting only $9.5 billion in AI investment last year compared to approximately $66 billion in the United States. In terms of compute infrastructure, the United States built more data center capacity in the past year than the rest of the world combined, excluding China. And the United States does not always abide by the same rules as the European Union, with big tech companies such as Meta and X refusing to release certain AI products in Europe due to strict regulations. All of this has reduced the “Brussels effect” of EU AI legislation.

Yet one country stands to benefit from the growing transatlantic tech divide: China. Through its Global AI Governance Initiative, Beijing has advanced a vision rooted in state control and technological sovereignty. It is actively promoting this approach in international fora, including the Group of 77 at the United Nations, as well as through multilateral alliances such as BRICS (Brazil, Russia, India, China, and South Africa). At the same time, through its Digital Silk Road, China has invested heavily in infrastructure projects in the Global South, including fiber optics, surveillance technologies, and AI-powered public-sector tools. By exporting technologies embedded with its standards, China is fostering long-term dependencies and promulgating its vision for AI governance. That vision increasingly resonates with autocracies in the Middle East and North Africa that worry about “algorithmic colonization” and exclusion from Western-dominated frameworks.

Technological supremacy alone will not win the AI race. Unbridled protectionism or unilateralism risks exacerbating regulatory fragmentation, weakening transatlantic partnerships, and alienating the Global South. It also risks creating misaligned incentives and interoperability gaps with U.S. allies and partners, undermining collective security. In the long term, these forces will only diminish the United States’ ability to counter China. Absent strategic engagement on the future of global AI governance, “America First” may soon become “America Alone.”

The author is a Senior Fellow at Just Security and a Strategy and Policy Fellow at Oxford University’s Blavatnik School of Government. She previously served for a decade in the U.S. government, including at the White House National Security Council and Office of the Vice President.
