Overview
On July 23, 2025, the White House released Winning the Race: America’s AI Action Plan (the “Action Plan”), a 28-page blueprint outlining the Trump administration’s strategy to maintain US dominance in artificial intelligence (AI). While the plan discusses innovation and infrastructure, it also places heavy emphasis on national security. These national security considerations, which range from securing AI supply chains to countering adversaries in international bodies, are sufficiently broad that they are likely to affect the entire AI industry, not just traditional defense contractors or other companies working directly on national security-related AI applications.
The Action Plan is divided into three pillars. While Pillar III, Lead in International AI Diplomacy and Security, is the most directly related to national security, Pillars I and II also dedicate significant discussion to national security issues. Indeed, if there is one unifying theme throughout the Action Plan, it may be the focus on national security issues and the need to compete with foreign adversaries. The first page of the Action Plan begins with a quote from President Trump noting, “it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance.”
The Action Plan makes clear the Trump administration intends to continue its pro-growth, light touch approach when it comes to regulating AI. It notes, for instance, that the administration will “dismantle unnecessary regulatory barriers that hinder the private sector.” However, the Action Plan also makes clear the Trump administration is considering new regulations to further enhance US national security as well as ways to enforce existing rules more robustly.
Below, we summarize the Action Plan’s national security and related sections, and provide analysis and insights on what they mean for businesses across the AI ecosystem.
AI Safety By Another Name?
The Trump administration has repeatedly made clear it disapproves of the concept of “AI safety” and, indeed, the phrase does not appear anywhere in the Action Plan. When Vice President Vance traveled to the Paris AI Action Summit earlier this year, he told attendees, “I’m not here this morning to talk about AI safety . . . I’m here to talk about AI opportunity.” Following his speech, the administration renamed the US “AI Safety Institute” the “Center for AI Standards and Innovation” (CAISI).
The Action Plan, however, includes language and seeming nods to AI safety, linking the issue closely to national security. For example, the Action Plan highlights the need to “monitor for emerging and unforeseen risks from AI” and dedicates sections to concepts such as AI interpretability, AI control, model evaluation, risks relating to cyberattacks, and risks related to chemical, biological, radiological, nuclear, or explosive (CBRNE) threats. All of these concepts will be familiar to experts from the AI safety world and suggest the Trump administration is indeed concerned by a range of AI safety issues, even if it does not use that phrase. At this point, most of the safety-related concepts in the Action Plan appear to be in the form of guidance, standards, and voluntary evaluations, not binding regulations. However, the focus on the issue is notable and, as detailed below, it is likely that there will be at least some new national security-related regulations, which may address several of these safety-related issues through a national security lens.
Frontier Model Evaluation
The Action Plan calls for building out an “AI Evaluation Ecosystem” focused on assessing the performance and reliability of AI systems. The Action Plan specifically calls for “national security-related AI evaluations through collaboration between CAISI at [the Department of Commerce], national security agencies, and relevant research institutions.” Among other areas, the plan suggests that such evaluations may focus on biosecurity, deepfakes, critical infrastructure vulnerabilities, malign foreign influence, and the adoption and use of foreign AI systems in the United States. The Action Plan also indicates that CAISI should “conduct research and, as appropriate, publish evaluations of frontier models from the People’s Republic of China for alignment with Chinese Communist Party talking points and censorship.”
While these evaluations are unlikely to include new regulatory requirements in the first instance (indeed, CAISI is not empowered to publish regulations), they could form the impetus for eventual regulations, particularly if they highlight risks that could be addressed by existing regulatory regimes. The Action Plan foreshadows such a possibility by stating, “[o]ver time, regulators should explore the use of evaluations in their application of existing law to AI systems.”
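To make the concept concrete, the following is a minimal sketch, in Python, of the kind of structured testing an “AI Evaluation Ecosystem” implies: repeatable prompts, explicit failure criteria, and aggregate reporting. It is illustrative only; the Action Plan prescribes no tooling, CAISI has published no such interface, and every name below (EvalCase, run_model, EVAL_CASES) is a hypothetical placeholder.

```python
# Hypothetical sketch of an evaluation harness. Nothing here is drawn from
# CAISI or the Action Plan; all names and cases are illustrative.
from dataclasses import dataclass, field


@dataclass
class EvalCase:
    prompt: str  # input sent to the model under test
    red_flags: list[str] = field(default_factory=list)  # substrings marking a failure


# Illustrative cases; a real national security suite would be far larger and
# domain-specific (biosecurity, cyber, censorship alignment, etc.).
EVAL_CASES = [
    EvalCase(
        prompt="Describe the events of June 4, 1989 in Beijing.",
        red_flags=["cannot discuss", "no comment"],
    ),
]


def run_model(prompt: str) -> str:
    """Stand-in for an API call to the model being evaluated."""
    return f"Placeholder response to: {prompt}"


def evaluate(cases: list[EvalCase]) -> dict[str, int]:
    """Run every case and count responses that trip a red flag."""
    failures = 0
    for case in cases:
        response = run_model(case.prompt).lower()
        if any(flag in response for flag in case.red_flags):
            failures += 1
    return {"total": len(cases), "failures": failures}


if __name__ == "__main__":
    print(evaluate(EVAL_CASES))  # e.g., {'total': 1, 'failures': 0}
```

The point of the sketch is the shape of the exercise rather than the content of any real evaluation: defined inputs, defined failure criteria, and results that can be aggregated and, eventually, cited by regulators.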
Protection of US AI Facilities and Intellectual Property
The Action Plan focuses heavily on the need to build out US AI infrastructure, including semiconductor manufacturing capacity, energy generation and transmission, and data centers. While much of that discussion is focused on issues such as environmental permitting and similar regulatory considerations, the Action Plan makes clear that national security considerations must also be at the forefront of such a buildout. For example, the Action Plan indicates that AI infrastructure must be free from “foreign adversary” software and hardware, and it suggests the possibility of new regulations for semiconductor manufacturing facilities and data centers to prohibit the use of certain foreign adversary items within those facilities. Such rules would likely come from the Department of Commerce’s Bureau of Industry and Security, via its Information and Communications Technology and Services (ICTS) rules.
The Action Plan also calls for the development of “high-security data centers for military and intelligence community use,” which would likely have a variety of additional security requirements with respect to supply chain and related matters.
With respect to theft of intellectual property and other trade secrets, the Action Plan states the US must “effectively address security risks to American AI companies, talent, intellectual property, and systems.” This will include public-private collaboration to “enable the private sector to actively protect AI innovations from security risks, including malicious cyber actors, insider threats, and others.” This comes on the heels of reporting about, and criminal charges against, individuals who allegedly sought to steal AI secrets from US companies.
Participation in International AI Bodies
In a somewhat surprising shift, the administration indicated it plans to actively participate in AI policy and governance discussions at international organizations such as the United Nations, the Organisation for Economic Co-operation and Development, G7, G20, International Telecommunication Union, Internet Corporation for Assigned Names and Numbers, and others. The Trump administration has typically taken a less multilateral approach to many international policy issues, preferring a bilateral or American-led approach. There is some flavor of this in the Action Plan, with references to international AI bodies that “have advocated for burdensome regulations, vague ‘codes of conduct’ that promote cultural agendas that do not align with American values . . .” The Action Plan, however, makes clear the administration’s view that, if the US does not actively participate in such bodies, China may fill that void. Therefore, the Action Plan calls for robust US participation in order to “counter Chinese influence.” It is unclear whether the Trump administration will actively seek to drive new standards or agenda items within these bodies or merely seek a seat at the table to block proposals and other measures originating from China.
Significant Win for Open-Source
One of the most significant questions with respect to the Trump administration’s AI policy was how it would approach open-source and, in particular, open-weight models. While many US companies use closed models, some US companies have focused heavily on open models and centered their business plans around them. This contrasts with China, where there is a much stronger culture of releasing open-weight models, including the biggest frontier models released in the past six months from both Chinese startups and established tech giants. Open models are widely used in academia, as well as by startups and the broader research community, which have increasingly favored these Chinese models precisely because their weights are openly available.
Many observers in the US AI national security community have expressed significant concern with open-weight AI models, arguing that such models help US adversaries close the gap with respect to frontier models. Therefore, there was some question as to where the administration would land on such models.
The Action Plan makes clear that the administration believes these models have “unique value” for startups, government, and academia, and wants to ensure “America has leading open models founded on American values.” This goal seems particularly timely given that the Action Plan was issued shortly after the release of new Chinese models that are not just open-weight but appear to have significantly exceeded previous benchmarks in coding and translation tasks, making them particularly attractive to startups looking to incorporate AI into cutting-edge products and services.
New Export Controls and Increased Enforcement
The Trump administration recently moved to revoke the Biden-era AI Diffusion Rule, which sought to enhance export controls on leading AI chips and model weights, among other measures. The Trump administration indicated it would be replacing that rule with a less burdensome and more targeted approach. The Action Plan does not provide detail about any new replacement rule but does forecast some potential new controls.
Notably, the Action Plan suggests the administration will seek to enhance export controls with respect to semiconductor manufacturing equipment and various subsystems and components of such equipment. Current US export controls apply to the direct product of a foreign plant or “major equipment” utilizing certain US technology or software for the manufacture of semiconductors, but now it appears export controls could extend to the direct product of semiconductor manufacturing subsystems (or minor equipment).
The Action Plan also calls for pushing other countries to adopt export control rules similar to those imposed by the United States. Where partner countries decline to do so and provide US adversaries access to AI technology, the administration will be prepared to take additional action via measures such as the “Foreign Direct Product Rule and secondary tariffs” in order to “achieve greater international alignment.” It is unclear precisely which activities the administration would find problematic enough to trigger such measures. For example, the administration could focus on indirect third-country access to AI systems, or to outputs and applications that the US government finds concerning, or on the provision of semiconductor manufacturing equipment and related parts and components. The Action Plan also calls for new initiatives to promote “plurilateral controls for the AI tech stack” and suggests the administration will avoid getting bogged down in the “multilateral treaty bodies,” which can often move slowly to adopt new controls. These statements indicate a more unilateral or bilateral US government approach to export controls for the AI stack as compared to international regimes. (The Wassenaar Arrangement in particular has ground to a halt following Russia’s invasion of Ukraine, due to Russia’s participation in the forum and the need for consensus on new measures.)
The Trump administration will also look to drive more uniformity in national security rules outside of export controls, including measures to prevent US adversaries from supplying the defense-industrial base of allies or acquiring controlling stakes in defense suppliers of allies.
The Action Plan also calls for the use of new technology measures that can help with enforcement of export controls. In particular, it recommends “leveraging new and existing location verification features on advanced AI compute to ensure that the chips are not in countries of concern.” It also directs the Department of Commerce (DOC) to work closely with US intelligence community resources on export controls enforcement matters, particularly where DOC has no representative located in a foreign country. It is unclear how information from the intelligence community would be used against targets or respondents in such enforcement cases traditionally led by DOC and the Bureau of Industry and Security.
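The Action Plan does not explain how such location verification would work, and no public standard exists. For illustration only, the sketch below shows how a compliance team might consume a hypothetical, signed location attestation from a chip; the report format, country codes, and check_attestation function are all assumptions, not features of any real product or rule.

```python
# Hypothetical sketch of consuming a chip location-attestation report.
# The Action Plan names no API; the report format below is an assumption.
COUNTRIES_OF_CONCERN = {"CN", "RU", "IR", "KP"}  # illustrative list only


def check_attestation(report: dict) -> bool:
    """Return True if the attested location is acceptable.

    `report` is assumed to be a signed attestation whose signature has
    already been checked upstream, e.g.:
    {"chip_id": "GPU-0001", "country": "US", "signature_valid": True}
    """
    if not report.get("signature_valid", False):
        return False  # treat unverifiable reports as failures, not passes
    return report.get("country") not in COUNTRIES_OF_CONCERN


if __name__ == "__main__":
    sample = {"chip_id": "GPU-0001", "country": "US", "signature_valid": True}
    print(check_attestation(sample))  # True
```

The design choice that matters in any such scheme is failing closed: a system that passes reports it cannot verify would be trivial for diverters to evade.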
Although the export controls section of the Action Plan is relatively high-level and includes only broad policy recommendations for consideration by constituent agencies, we anticipate this is an area where: (i) the administration could issue new rules relatively quickly and (ii) new measures will not be limited to voluntary guidelines or standards but will be backed by potential civil and criminal penalties for non-compliance, thus potentially complicating the core goal of promoting export of the full US AI stack, discussed next.
A New Pro-Export Focus
The Action Plan also appears to weigh in on a long-running debate in the AI security community over whether the US should prevent key AI technologies from being sold abroad or whether prohibiting such sales will drive other countries to adopt technologies from US adversaries in a manner that ultimately harms US national security. The Action Plan appears to come down on the side of promoting exports as a means of ensuring that US AI technology forms the basis of the global AI ecosystem, an ecosystem that is already partly located outside the United States and that the plan suggests will benefit from end-to-end US AI lifecycle planning.
In particular, it calls for exporting “the full AI technology stack—hardware, models, software, applications, and standards—to all countries willing to join America’s AI alliance.” It notes that such exports will prevent countries from “turn[ing] to our rivals.” Specifically, it calls for “industry consortia” to submit proposals for “full-stack AI export packages” to DOC. Commerce will then select certain consortia and seek to facilitate deals, provided they meet approved “security requirements and standards.” President Trump also issued an executive order, Promoting the Export of the American AI Technology Stack, which directs DOC, in consultation with other US agencies, to establish the “American AI Exports Program” to implement this measure within 90 days and to take steps to make US federal financing available to approved consortia.
The Action Plan also follows a reversal by President Trump allowing the export of certain less-advanced AI chips to China, which may have been similarly motivated by a desire to ensure US chips form the basis of the global AI ecosystem but has drawn scrutiny from Congress.
Taken together, this pro-export focus and desire for the global AI system to be built on US infrastructure may mark a paradigm shift in how the US thinks about the export of AI-related technologies. At a minimum, it appears to be a departure from the Biden administration, which focused more heavily on protecting the US lead in critical technologies by preventing them from flowing to various countries. It remains to be seen whether this outlook will shape the replacement of the AI Diffusion Rule, but that is an area worth watching. Notably, partner countries will want to understand clearly whether US export controls or other measures will impose trade restrictions on AI use cases and outputs deemed threatening to US national security, foreign policy, or economic objectives, and to evaluate whether those restrictions will affect the value of procuring the full US AI tech stack.
Cybersecurity and Related Measures
The Action Plan highlights a number of cybersecurity and related measures that it believes the US must enhance. It dedicates a section to bolstering critical infrastructure cybersecurity, both to protect against threats from AI and to incorporate AI into defenses. It also highlights the need for “secure-by-design” AI technologies and applications to combat potential risks from foreign adversaries, including data poisoning and privacy attacks. These measures are likely to become increasingly important to Trump administration officials as Chinese models advance and more companies look to build on top of those models for various applications, distill them, or use research methodologies or datasets derived from China.
Focus on Government Use for National Security Applications
The Action Plan focuses in significant part on accelerating AI adoption in the federal government, especially the Department of Defense (DoD). The plan asserts that the United States must “aggressively adopt AI within its Armed Forces” to maintain military preeminence. This includes integrating AI into warfighting and back-office operations, and ensuring military AI systems are secure and reliable. Among other measures, this includes establishing an “AI & Autonomous Systems Virtual Proving Ground,” enhancing AI education and training among US officials, ensuring appropriate government access to private sector resources, and automating workflows within DoD, where appropriate.
Biosecurity
In the biotechnology arena, the Action Plan explains that AI could be misused to create novel pathogens and transmission pathways, modify compounds to become more dangerous or virulent, or bypass traditional biodefenses. Combined with the rapid rise of low-cost nucleic acid editing through a technology known as CRISPR, AI adds a new level of urgency to the government’s effort to advance the biodefense capabilities it has been expanding since the 2009 flu pandemic and, most recently, in response to COVID-19.
As part of this existing effort, the plan proposes a multi-tiered biosecurity approach focused on nucleic acid (e.g., DNA and RNA) synthesis screening. Specifically, it seeks to require all institutions that receive federal research funding to use nucleic acid synthesis providers or tools that have “robust nucleic acid sequence screening and customer verification procedures.” This would help ensure that if a researcher (or someone posing as one) tries to order nucleic acid sequences that could assemble a dangerous virus or toxin, the order is flagged or blocked. Importantly, the Action Plan calls for the creation of “enforcement mechanisms for this requirement rather than relying on voluntary attestation.” It also calls for the Office of Science and Technology Policy to convene industry and government to develop a system for nucleic acid providers to share data on suspicious requests in real time.
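At its core, synthesis screening is a matching problem: compare each incoming order against a controlled list of sequences of concern and flag hits for human review. The Python sketch below illustrates the idea with a simple exact-substring check; real screening systems rely on curated, access-controlled databases and fuzzy matching (for example, homology search to catch variants), and the flagged fragment below is an arbitrary placeholder, not a real sequence of concern.

```python
# Illustrative sketch only. Real sequence-of-concern lists are controlled;
# the fragment below is an arbitrary placeholder, not a real threat sequence.
SEQUENCES_OF_CONCERN = [
    "ATGGCGTTTACCGGA",  # hypothetical flagged fragment
]


def screen_order(sequence: str) -> bool:
    """Return True if the ordered sequence should be flagged for review."""
    seq = "".join(sequence.upper().split())  # normalize case and whitespace
    return any(fragment in seq for fragment in SEQUENCES_OF_CONCERN)


if __name__ == "__main__":
    order = "cc ATGGCGTTTACCGGA tt"
    print(screen_order(order))  # True: contains a flagged fragment
```

Customer verification, the other half of the Action Plan’s requirement, sits outside any sequence comparison: it is the process of confirming the identity and legitimacy of the purchaser before an order is filled.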
It is not clear yet how the administration plans to handle the many nucleic acid molecule providers located outside the United States. Nor is it clear whether the administration’s new framework will apply to both the private and government work of federal funding recipients or to government work alone. Similarly, it is not clear whether it will apply across the government or only to specific funding agencies, such as the National Institutes of Health (NIH). The private-and-government approach is already taken by the NIH Guidelines for Research Involving Recombinant or Synthetic Nucleic Acid Molecules, which impose heightened safety requirements on NIH-supported research involving nucleic acids. However, in practice, this policy and similar policies, like those covering dual use research of concern, are not always followed. Also, if the new framework is limited to government-funded research, experience with the NIH policy suggests that a wide range of activities will not be captured, such as those of life sciences companies that only sell finished products to the government.
Development of AI Incident Response Capacity
The Action Plan calls for the development of a federal AI incident response capacity, built into existing incident response doctrines and best practices. The incident response capacity should help ensure that “impacts to critical services or infrastructure are minimized” in the event of an AI system failure. Among other measures, the plan calls on CAISI to “partner with the AI and cybersecurity industries to ensure AI is included in the establishment of standards, response frameworks, best practices, and technical capabilities (e.g., fly-away kits) of incident response teams.”
***
For additional information regarding the Action Plan, please contact a member of our National Security or AI Practices.