For the first time, Washington is close to deciding how to regulate artificial intelligence. The battle that is brewing is not about the technology itself, but about who gets to regulate it.
In the absence of a meaningful federal AI standard focused on consumer safety, states have introduced dozens of bills to protect residents from AI-related harms, including California’s AI Safety Bill SB-53 and Texas’ Responsible AI Governance Act, which prohibits intentional misuse of AI systems.
Tech giants and emerging Silicon Valley startups claim that such laws create an unworkable patchwork that threatens innovation.
“It will slow us down in the race against China,” Josh Vlasto, co-founder of the pro-AI PAC Leading the Future, told TechCrunch.
The industry, and many of its transplants in the White House, are pushing for a national standard or none at all. In the trenches of that all-or-nothing battle, new efforts have emerged to prevent states from enacting their own AI legislation.
House lawmakers are reportedly trying to use the National Defense Authorization Act (NDAA) to block state AI laws. Meanwhile, a leaked draft of a White House executive order also shows strong support for preempting state efforts to regulate AI.
A sweeping preemption that would strip states of their authority to regulate AI is unpopular in Congress, which voted overwhelmingly against a similar provision earlier this year. Lawmakers have argued that without a federal standard, blocking the states would leave consumers vulnerable to harm and let tech companies operate without oversight.
To create this national standard, Rep. Ted Lieu (D-CA) and the bipartisan House AI Task Force are preparing a package of federal AI bills covering a range of consumer protections, including health care fraud, transparency, child safety, and catastrophic risk. Such a package would likely take months, if not years, to become law, which is why the current push to limit state power is one of the most contentious battles in AI policy.
Battle Lines: NDAA and EO

Efforts to prevent states from regulating artificial intelligence have intensified in recent weeks.
The House has weighed adding language to the National Defense Authorization Act that would prevent states from regulating AI, Majority Leader Steve Scalise (R-LA) told Punchbowl News. Politico reported that Congress was working to finalize an agreement on the defense bill before Thanksgiving. A source familiar with the matter told TechCrunch that negotiations focused on narrowing the scope to preserve state authority in areas such as child safety and transparency.
Meanwhile, a leaked draft from the White House reveals the administration's potential preemption strategy. The executive order, which has reportedly been put on hold, would create an "AI Litigation Task Force" to challenge state AI laws in court, direct agencies to evaluate state laws deemed "onerous," and push the Federal Communications Commission and Federal Trade Commission toward national standards that override state rules.
Notably, the order would give David Sacks, Trump's AI and crypto czar and co-founder of venture capital firm Craft Ventures, joint authority in creating a unified legal framework. That would hand Sacks direct influence over AI policy, displacing the typical role of the White House Office of Science and Technology Policy and its head, Michael Kratsios.
Sacks has publicly called for blocking state regulation and keeping federal oversight light, favoring industry self-regulation in order to "maximize growth."
The patchwork argument
Sacks' position reflects the view of much of the AI industry. Several major pro-AI political action committees have emerged in recent months, spending millions of dollars in local and state elections to oppose candidates who support AI regulation.
Leading the Future, with support from Andreessen Horowitz, OpenAI president Greg Brockman, Perplexity, and Palantir co-founder Joe Lonsdale, has raised more than $100 million. This week, Leading the Future launched a $10 million campaign to push Congress to craft a national AI policy that preempts state laws.
“When you’re trying to drive innovation in the tech sector, you can’t have a situation where all these laws keep coming from people who don’t necessarily have the technical expertise,” Vlasto told TechCrunch.
He said a patchwork of government regulations “will slow us down in the race against China.”
Nathan Limmer, executive director of Build American AI, the PAC's advocacy arm, emphasized that the group supports preemption even without federal consumer protections specific to AI. Existing laws, such as those covering fraud or product liability, are sufficient to address AI harms, Limmer said. While state laws often seek to prevent harms before they occur, Limmer favors a more reactive approach: let companies move quickly, and address problems in court later.
No preemption without representation

Alex Bores, a New York Assembly member running for Congress, is one of Leading the Future's top targets. He sponsored the RAISE Act, which requires large AI labs to have safety plans to prevent serious harms.
"I believe in the power of AI, which is why it's so important to have sensible regulations," Bores told TechCrunch. "Ultimately, the AI that wins in the market will be trustworthy AI, and the market often undervalues, or creates weak short-term incentives to invest in, safety."
Bores supports a national AI policy, but believes states can move faster to address emerging risks.
And the states are, in fact, moving faster.
As of November 2025, 38 states have adopted more than 100 AI-related laws this year, primarily targeting deepfakes, transparency and disclosure, and government use of AI. (A recent study found that 69% of these laws impose no requirements on AI developers at all.)
Congress's own record bolsters the "slower than the states" argument. Hundreds of AI bills have been introduced, but only a handful have passed. Since 2015, Rep. Lieu has submitted 67 bills to the House Science Committee; only one became law.
More than 200 lawmakers have signed an open letter opposing the preemption language in the National Defense Authorization Act, arguing that "states serve as laboratories of democracy" and must "retain the flexibility needed to meet new digital challenges as they emerge." Nearly 40 state attorneys general also sent an open letter opposing a ban on state AI regulation.
Cybersecurity expert Bruce Schneier and data scientist Nathan E. Sanders, authors of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, argue that the patchwork complaint is exaggerated.
They point out that AI companies already comply with stricter EU regulations, and that most industries have found ways to operate under varying state laws. The real motive, they say, is to avoid accountability.
What might a federal standard look like?
Lieu is drafting a sweeping bill of more than 200 pages that he hopes to introduce in December. It covers a range of issues, including fraud penalties, deepfake protections, whistleblower protections, computing resources for academia, and mandatory testing and disclosure for large language model companies.
That last provision would require AI labs to test their models and publish the results, something most now do voluntarily. Lieu has not yet introduced the bill, but he said it does not direct any federal agency to review AI models directly. That distinguishes it from a similar bill introduced by Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT), which would require a government-run evaluation program for advanced AI systems before they can be deployed.
Lieu acknowledged that his bill would not be as strict, but said it had a better chance of becoming law.
"My goal is to get something into law this session," Lieu said, noting that House Majority Leader Scalise is openly hostile to regulating artificial intelligence. "I'm not writing the bill I would write if I were king. I'm trying to write a bill that can pass the Republican-controlled House, the Republican-controlled Senate, and the Republican-controlled White House."