Overview
On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence, outlining the Trump administration's legislative priorities for federal AI governance. The Framework follows President Trump's December 11, 2025 Executive Order directing the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to jointly prepare a legislative recommendation for a federal AI policy roadmap. Its goal is to guide Congress to a unified national approach to AI regulation that promotes US competitiveness and avoids a patchwork of inconsistent state requirements.
The release comes as lawmakers continue to advance alternative approaches to AI regulation, including sweeping legislation introduced by Senator Marsha Blackburn (R‑TN), which would impose significantly more prescriptive requirements on AI developers and deployers.
Overview of the White House National Policy Framework
The Framework sets out seven legislative objectives:
- Establish a federal policy framework that preempts burdensome state AI laws;
- Protect children and empower parents;
- Safeguard and strengthen American communities;
- Respect intellectual property rights and support creators;
- Prevent censorship and protect free speech;
- Enable innovation and ensure American AI dominance; and
- Educate Americans and develop an AI‑ready workforce.
Preemption
A central feature of the Framework is its call for federal preemption of state AI laws that impose burdensome requirements. Specifically, the Framework provides that states should not be permitted to regulate AI development or penalize AI developers for a third party's unlawful conduct involving their models. The Framework further provides that states should not unduly burden Americans' use of AI for activity that would be lawful if performed without AI. The Trump administration argues that preemption is necessary to "protect American rights, support innovation, and prevent a fragmented patchwork of state regulations that would hinder [] national competitiveness, while respecting federalism and State rights."
At the same time, the Framework would preserve certain areas of state authority. Laws of general applicability, child safety laws, consumer protection laws, laws related to fraud, state zoning laws for AI infrastructure, and requirements governing a state's own use of AI for procurement would not be preempted.
Whereas the White House Framework calls for broad preemption of state AI laws subject to specific carve-outs, Sen. Blackburn's recently introduced Trump America AI Act takes a more limited approach. Although the Blackburn legislation does not preempt any generally applicable laws, some individual titles contain their own preemption provisions. For example, while the bill imposes a national standard on child safety, child safety laws would be preempted only to the extent they directly conflict with federal standards, and states would not be prohibited from enacting laws that provide greater protections to minors than those provided under the bill.
Key Policy Area Priorities
The Framework outlines a series of priority policy areas and clarifies that these issues should be addressed through existing regulatory bodies rather than a new federal AI regulator. These policy areas include:
- Child Safety: The administration calls for measures such as age‑assurance tools, parental controls, and safety features for systems likely to be accessed by minors. Notably, it cautions against vague content-based standards or open-ended liability that could increase litigation risk.
- Strengthening American Communities: The Framework links AI development to economic growth and infrastructure development, encouraging Congress to streamline federal permitting for AI data centers, enact safeguards to prevent higher residential electricity costs, and enhance efforts to combat AI-enabled fraud.
- Intellectual Property: The Trump administration defers to the courts on whether training AI models on copyrighted content constitutes fair use, while supporting voluntary licensing mechanisms and federal protections against unauthorized commercial uses of AI-generated likenesses.
- Censorship and Free Speech: The administration seeks to prevent the federal government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisanship or ideology. It also asks Congress to provide effective means for redress where government censorship efforts influence AI outputs.
- American AI Dominance: The administration calls for removing regulatory barriers to innovation, expanding access to AI testing environments and federal datasets, relying on existing regulators, and avoiding creation of a new federal AI oversight body.
- AI-Ready Workforce: The Framework encourages integrating AI training into existing education and workforce programs, studying AI‑driven workforce shifts, and investing in youth development and technical assistance programs to prepare workers for an AI‑powered economy.
Senator Blackburn's legislation reflects a distinctly different regulatory approach, even as it overlaps with the administration's priorities in certain areas. Sen. Blackburn's proposal would:
- Impose a statutory duty of care on AI chatbot developers;
- Expand federal and state enforcement authority;
- Create new private rights of action;
- Curb liability protections, such as those provided by Section 230;
- Mandate reporting on AI-related job displacement;
- Require bias audits, content provenance measures, and restrictions on AI companions for minors; and
- Impose registration obligations for certain foreign AI developers.
Although both the White House and Sen. Blackburn proposals prioritize child safety, creator protections, and transparency, Sen. Blackburn's bill would substantially increase compliance obligations and litigation exposure for developers. It is more prescriptive, more enforcement-oriented, and more willing to impose affirmative duties on AI companies than the White House Framework.
Implications for the Federal AI Debate
The contrasting frameworks highlight several issues likely to shape the AI policy debate in Congress, including:
- The scope of federal versus state authority, from broad preemption to narrow conflict preemption;
- Allocation of liability among developers, deployers, and users;
- The level of prescriptiveness, from voluntary standards to detailed statutory mandates;
- Obligations placed on developers and deployers surrounding duties of care and content moderation; and
- Approaches to intellectual property, data governance, and AI transparency.
Companies developing or deploying AI systems should closely track federal legislative activity as lawmakers reconcile competing proposals and consider how to balance innovation, consumer protection, and national competitiveness. Companies should also ensure governance programs, risk assessments, and compliance processes are positioned to adapt to a rapidly evolving legal landscape.
We will continue tracking regulatory developments across federal agencies and legal developments at the state level through Steptoe's AI Legislative Tracker.