Lean Into AI
December 9, 2025

Trump's AI Executive Order: What Just Happened and What Comes Next

Trump’s new AI order aims to erase most state AI regulations in favor of a framework built to “protect innovation and reduce legal uncertainty,” but without replacing the safeguards it removes.
Lexi Reese, Lanai CEO & Co-Founder
Thought Leadership

In July, the White House chose speed over oversight in AI policy. This week, Trump confirmed the next step: a "ONE RULE" Executive Order aimed at wiping out most state-level AI laws. What's changing is relatively clear. What replaces those laws is not.

What's Actually Happening

Over the past two years, states did most of the real governing on AI. Thirty-eight states passed roughly 100 AI-related laws, covering everything from political deepfakes and algorithmic hiring discrimination to biometric and consumer privacy. The new executive order would direct the Justice Department to sue states, instruct federal agencies to treat strong state AI rules as obstacles to national policy, and threaten to withhold certain federal funds from states that refuse to roll back their own safeguards.

In other words, Washington is moving to centralize AI policy while explicitly weakening the state-level "patchwork" that had started to put real constraints on AI use in elections, hiring, and sensitive data.

What Trump and Sacks Propose Instead

Trump and his AI adviser David Sacks argue that 50 different state regimes create compliance burdens that slow innovation and risk ceding the AI race to China. They point to China's ability to rapidly scale large domestic AI models under a single national rulebook and warn that fragmented U.S. oversight could make it harder to match that speed.

The emerging approach is a "minimally burdensome" federal framework built around voluntary standards, industry self-regulation, and stronger liability shields for developers and deployers of AI systems. The priority is to protect innovation and reduce legal uncertainty for AI companies, not to enshrine detailed user-rights and safety rules in federal law.

The Gap This Creates

The problem is less what the order removes and more what it fails to add. 

Most concrete rules on deceptive deepfakes in campaigns, including disclosure and takedown requirements, currently sit at the state level. Preempting them without a federal replacement means fewer clear, enforceable election-related AI rules in the near term.

In hiring and healthcare, states and some cities have been first movers on bias audits, transparency requirements, and notice to applicants or patients when AI is used in consequential decisions. If those rules are blocked or chilled, enforcement largely falls back on older civil-rights and consumer-protection law. Those are important, but not tailored to the specific ways AI systems can discriminate or fail. The draft framework also emphasizes reducing "legal uncertainty" for developers rather than expanding clear avenues of recourse for individuals or small businesses harmed by AI-driven decisions.

What This Means for Companies

For serious multinationals, the practical effect is not a race to the regulatory bottom but a widening gap between U.S. formal law and the real compliance baseline they must meet globally.

A global life sciences company using AI for drug discovery, diagnostics, or clinical decision support now faces four overlapping realities:

United States (Federal): Federal policy is moving toward high-level documentation and general consumer- and health-law enforcement, with no detailed AI-specific rules yet for elections, hiring, or healthcare beyond existing sectoral regulations.

United States (State): In key states like California and Massachusetts, emerging AI-specific privacy and algorithmic accountability laws will be challenged or chilled, but expectations from patients, hospitals, and institutional investors for strong internal AI governance will not disappear.

European Union: Under the EU AI Act, many life-sciences systems are treated as "high-risk," requiring formal risk assessments, robust data governance, built-in human oversight, detailed technical documentation, transparency to users, and ongoing monitoring and incident reporting to regulators.

China: The company must contend with algorithm registration requirements, data localization constraints, and content and safety review obligations, especially where systems touch health information or sensitive content.

The practical result: The company will design its AI systems and internal controls to meet the strictest regime — the EU AI Act, with China-specific adaptations — then "back-map" those standards to the more permissive U.S. environment. One system, multiple compliance narratives: EU conformity assessments and technical documentation; China registration and data-handling proofs; and U.S. policies framed in terms of general consumer, health, and civil-rights law rather than AI-specific statutes.

For an emerging startup growing quickly across borders, the tradeoffs are starker. In the short run, lighter U.S. federal rules and weaker state constraints lower regulatory friction and speed up pilots with U.S. customers. But investors and global customers will still ask whether the company can pass EU AI Act scrutiny and withstand future regulatory swings in the U.S. Many will treat early EU-level compliance as a signal of seriousness and resilience, while discounting business models that only work in a narrow, temporarily permissive U.S. window.

The Strategic Question

China coordinates from the top down. Europe standardizes through a comprehensive, enforceable risk-based regime. America, for now, is betting on speed and deregulation — centralizing power in Washington while deliberately weakening state protections.

For a global life sciences major or a fast-growing startup, the rational response is to treat the EU (and, in some sectors, China) as the real rule-setters, and to treat U.S. policy as a moving political variable rather than a stable baseline.

Which model "wins" will depend not only on who ships models fastest, but on which system produces trustworthy, safe, and globally acceptable AI—and what costs are paid, by citizens and companies, along the way.
