Key Points
- States have passed AI bills such as California’s SB‑53 and Texas’ Responsible AI Governance Act.
- 38 states adopted over 100 AI‑related laws by November 2025, many with minimal developer requirements.
- Tech industry argues state regulations create a patchwork that could hinder innovation.
- House lawmakers are considering NDAA language to preempt state AI laws.
- A leaked White House executive order draft proposes an AI Litigation Task Force and national standards.
- Representative Ted Lieu is drafting a 200‑page federal AI megabill covering fraud, deepfakes, and model testing.
- Pro‑AI PACs, including Leading the Future, have launched a $10 million campaign for federal preemption.
- Over 200 lawmakers and nearly 40 state attorneys general oppose preemptive measures, citing state autonomy.
- Industry experts claim existing consumer‑protection laws are sufficient for AI harms.
Image: US President Donald Trump displays an executive order on artificial intelligence he signed at the “Winning the AI Race” summit in Washington, D.C.
Image: Alex Bores speaking at an event in Washington, D.C., on November 17, 2025.
State‑Level Action in the Absence of a Federal Standard
Without a comprehensive federal AI safety framework, individual states have moved to protect residents from AI‑related harms. California enacted the AI safety bill SB‑53, and Texas passed the Responsible AI Governance Act, which bans intentional misuse of AI systems. By November 2025, 38 states had adopted more than 100 AI‑related laws targeting deepfakes, transparency, and government use of AI. A recent study found that 69% of those statutes impose no requirements on AI developers.
Industry Push for a National Standard—or No Regulation
Tech giants and startups argue that the emerging patchwork of state regulations creates an “unworkable” environment that could slow innovation. Josh Vlasto, co‑founder of the pro‑AI PAC Leading the Future, warned that such fragmentation could “slow us in the race against China.” The industry, bolstered by political allies, is lobbying for a single national standard or, alternatively, for no regulation at all.
Congressional Moves to Block State Laws
House lawmakers are reportedly attempting to embed language in the National Defense Authorization Act (NDAA) that would prevent states from regulating AI. Majority Leader Steve Scalise confirmed that negotiations are focused on narrowing the scope of any preemption, potentially preserving state authority over areas such as child safety and transparency. At the same time, a leaked draft of a White House executive order proposes the creation of an “AI Litigation Task Force,” directs agencies to evaluate state statutes deemed “onerous,” and urges the FCC and FTC to develop national standards that would override state rules.
Key Players in the Federal Push
President Trump’s AI and Crypto Czar, David Sacks, co‑founder of Craft Ventures, was named in the executive order draft to co‑lead the effort to build a uniform legal framework. Sacks has publicly advocated for blocking state regulation and favoring industry self‑regulation to “maximize growth.” Representative Ted Lieu (D‑CA) and a bipartisan House AI Task Force are drafting a massive federal AI bill that would address fraud, deepfakes, whistleblower protections, compute resources for academia, and mandatory testing and disclosure for large‑language‑model companies. The proposed megabill, spanning over 200 pages, is expected to take months or years to become law.
Political Funding and Advocacy
Pro‑AI super PACs have poured significant resources into the debate. Leading the Future, backed by Andreessen Horowitz, OpenAI president Greg Brockman, Perplexity, and Palantir co‑founder Joe Lonsdale, launched a $10 million campaign urging Congress to craft a national AI policy that would preempt state laws. Nathan Leamer, executive director of Build American AI, echoed the preemption stance, arguing that existing consumer‑protection statutes are sufficient to address AI harms.
Opposition from Lawmakers and States
More than 200 lawmakers signed an open letter opposing preemptive language in the NDAA, emphasizing that states serve as “laboratories of democracy.” Nearly 40 state attorneys general issued a similar letter. Cybersecurity expert Bruce Schneier and data scientist Nathan E. Sanders contended that the patchwork concern is overstated, noting that companies already comply with stricter EU regulations.
Legislative Landscape and Prospects
Since 2015, Representative Lieu has introduced 67 AI‑related bills to the House Science Committee, with only one becoming law. In contrast, a separate bill introduced by Senators Josh Hawley (R‑MO) and Richard Blumenthal (D‑CT) would require a government‑run evaluation program for advanced AI systems before deployment—a stricter approach than Lieu’s proposal. Lieu’s goal is to craft legislation that can pass a Republican‑controlled House, Senate, and White House, even as Majority Leader Scalise remains openly hostile to AI regulation.
Source: techcrunch.com