State politics color reception to Trump's AI 'framework' order

Allison Mollenkamp, CQ-Roll Call

Published in Political News

WASHINGTON — The Trump administration plans to sue states for their artificial intelligence laws, but how the push is affecting work on future legislation depends on a state’s politics.

President Donald Trump issued an executive order in December that directs the Department of Justice to sue states with AI laws the administration deems “burdensome” to the industry, including on interstate commerce grounds. It came amid unsuccessful attempts in Congress to move a White House-backed moratorium on states’ AI regulations.

Supporters of AI regulation, including in Colorado, California and Texas, say their states will press ahead on AI while Congress and the executive branch take their time, but lawmakers aligned with the administration may be more likely to take threats to sue or withhold funding seriously.

Cody Venzke, a senior policy counsel for the American Civil Liberties Union, called the plan “a hodgepodge of . . . faulty legal theories.”

“I don’t think that, for the most part, it will be effective in stopping states from regulating artificial intelligence to protect their citizens.”

Under the Dec. 11 executive order, the Commerce Department and David O. Sacks, White House special adviser for AI and crypto, were tasked with publishing within 90 days “an evaluation of existing state AI laws that identifies onerous laws.”

Once states are identified by the administration as having onerous laws, they could face lawsuits and withholding of certain federal funding, including grants under the $42.5 billion Broadband Equity, Access and Deployment program, meant to increase access to high-speed internet.

“The federal government is limited in the way that it can unilaterally change the conditions on federal grants to states,” Venzke said, as programs like BEAD were established by Congress.

Venzke also questioned the authority of the Federal Communications Commission and Federal Trade Commission, both involved in the executive order, to regulate AI or preempt states.

Utah bill

The administration is already trying to wield informal influence over state policies. Last week, the White House Office of Intergovernmental Affairs sent a memo, obtained by CQ Roll Call, in opposition to a Utah bill that would require developers of large frontier AI models to publish public safety and child protection plans. The memo was first reported by Axios.

“We are categorically opposed to Utah HB 286 and view it as an unfixable bill that goes against the Administration’s AI Agenda,” the memo said. The one-page statement did not offer legal justifications for its opposition.

The White House did not immediately respond to a request for comment.

The Utah bill, which passed out of a state House committee late last month, would also require risk assessments for frontier models, require developers to report certain safety incidents to the state government and establish whistleblower protections for employees of large frontier developers.

The bill’s sponsor, Republican state Rep. Doug Fiefia, expressed his opposition to the executive order in a post on TikTok in December.

“This executive order goes too far. I support the idea of a national AI framework, but it should come through Congress, where there’s transparency, debate, collaboration. That’s how you build trust and lasting policy. Don’t forget about states’ rights and the 10th Amendment. And until that happens, states should be allowed to protect their people,” Fiefia said.

Even before that move by the White House, legal uncertainty led some states to stay the course on their existing laws and possible future legislation.

Colorado’s AI Act is currently scheduled to go into effect this summer. A working group established prior to the executive order is negotiating updates to the law.

As it currently stands, the measure would require developers and deployers of “high-risk” AI systems — those making consequential decisions — to exercise “reasonable care” to protect users from risks of discrimination, which it defines based on impact rather than intent.

In the executive order, Trump used Colorado’s law and the “disparate impact” standard as an example of state laws “increasingly responsible for requiring entities to embed ideological bias within models.”

Loren Furman is CEO of the Colorado Chamber of Commerce and represents industry in the working group negotiations. She said she hasn’t seen “a lot of attention paid” to the Trump administration order.

“When you’re in a state like ours, and you have . . . a majority of Democrats that are, you know, that are in control, I think they . . . feel like the legislature is gonna make the decisions and move forward,” Furman said.

Furman also said she expects at least two other AI-focused bills to be filed in Colorado this session, including one possibly focused on health care.

The executive order lays out a pathway for states to avoid conflict with the administration, including for access to discretionary grant funding. States could access funding by “entering into a binding agreement with the relevant agency not to enforce any such laws during the performance period” of a grant.

But Furman expects that if the federal government sued Colorado over its law, state Attorney General Phil Weiser, who is also running for governor, would continue his opposition to the administration.

“Attorney General Weiser has been filing lawsuits almost daily, so I certainly expect that would be the case,” Furman said.

Colorado isn’t the only state considering new AI legislation this session. In New York, one bill would require disclaimers on AI-generated news content. Another would impose a moratorium of at least three years on permits for new data centers.

So far this year, lawmakers in Florida, Washington, Utah and Virginia have made progress on their own AI bills.

Advocates for regulating AI in California seem similarly unconcerned, according to Teri Olle, vice president of the progressive nonprofit Economic Security California Action. The group’s 501(c)(3) partner, the Economic Security Project, was co-founded by Facebook co-founder Chris Hughes and promotes guaranteed income programs.

 

The group was an organizational co-sponsor of California’s Transparency in Frontier Artificial Intelligence Act, known as SB 53. The law requires large frontier developers to publish AI frameworks to explain how they incorporate standards and best practices. It also requires the developers to file a summary of a catastrophic risk assessment.

The bill’s sponsor, Democratic state Sen. Scott Wiener, has been outspoken in defending states’ ability to regulate where Congress hasn’t.

Olle called the executive order a “harassment scheme.” She also predicted that California would fight any potential lawsuits from the Trump administration.

“I have no indication that California would . . . allow its rights to be trampled,” Olle said.

Olle said she is not currently aware of concerns about BEAD funding or other federal grants being potentially denied to California, even as the state works on its budget.

Olle said she was surprised at the relative ease with which California’s law passed through the legislature, though she noted that tech industry CEOs “did not like the fact that they were . . . being curtailed in any way.” And going forward, she sees those CEOs as more of an obstacle to further AI legislation in California than the administration itself.

“The tech CEOs are not taking any of this sitting down, like, I think that’s the thing that’s harder . . . The thing that’s probably more impactful is the fact that the tech CEOs . . . their aligned interests have put, you know, hundreds of millions of dollars into PACs to try to defeat candidates,” she said.

Olle said that money is up against public opinion that supports regulating AI. A Gallup poll conducted last year found that 80% of those surveyed supported “maintaining rules for AI safety and data security, even if it means developing AI capabilities at a slower rate.”

“The actual politics of this issue are very squarely on the side of common sense regulation of tech,” Olle said.

GOP states’ views

The politics of the executive order, however, may be different in Republican-led states.

The Texas Responsible Artificial Intelligence Governance Act, known as HB 149, outlaws developing or deploying AI “with the intent to unlawfully discriminate against a protected class in violation of state or federal law.” It goes on to state that disparate impact is not sufficient to prove intent. The Texas law also requires government agencies that deploy AI systems to disclose when interactions are AI-generated.

David Dunmoyer, of the conservative nonprofit Texas Public Policy Foundation, said “there was a lot of disappointment” in Texas in reaction to the executive order.

“There’s a sense that states are being punished for stepping up and leading on, in the case of Texas, a really good piece of legislation that’s thoughtful and intentional,” Dunmoyer said.

He said that parts of Texas’ law appear to line up with the intentions of the executive order and its carve-outs to preserve state laws governing child safety, data center infrastructure and state government procurement and use of AI.

He highlighted the Texas law’s prohibition on government entities using AI for “social scoring” as consistent with the administration’s opposition to “viewpoint discrimination” in AI.

He also said that the Texas law’s focus on outcomes may be more consistent with the types of policies being put forward by Sacks in the White House.

“A lot of the Texas approach isn’t on ‘check these boxes before you operate.’ Instead it’s, ‘there’s a demonstrable harm that has taken place with the intention of creating those bad outcomes,’” he said.

Kevin Welch, president of digital civil liberties group EFF-Austin, said that other portions of the law could fit the carve-outs.

The law prohibits “developing or distributing an artificial intelligence system with the sole intent of” producing child sexual abuse material, which could be included in the child safety carve-out. The transparency provision governs state government use of AI, which could also be exempted from the order.

But even if parts of Texas’ law wouldn’t fit a carve-out, Welch thinks the state might be safe from having BEAD or other funding pulled for political reasons.

“When they use that tool, it tends to, in my observation, be more partisan on the Trump administration’s part. They’re far more likely to use threats like that against what they perceive as blue states,” Welch said.

He predicted that if the Trump administration has disagreements with Texas’ law, state leaders and the White House are more likely to discuss their differences and “have a dialogue.”

Dunmoyer said that the involvement of BEAD funding in the executive order does raise hard questions for Texas, which was approved last year for $1.27 billion in broadband deployment funds under the program.

“If it came down to, you pick, keep the AI law or connect the disconnected in vulnerable and rural communities, that’s a tremendously hard political decision to make,” Dunmoyer said.

He said a decision on whether to fight a lawsuit could also depend on who is elected as Texas’ new attorney general in November.

Dunmoyer said Texas lawmakers are balancing the needs of state residents with the “political realities” on AI, all while the state’s legislature is out of session in 2026.

“In conversations with lawmakers, there’s definitely this sense of, okay, let’s pause and wait and see what happens with the executive order,” Dunmoyer said.


©2026 CQ-Roll Call, Inc., All Rights Reserved. Visit cqrollcall.com. Distributed by Tribune Content Agency, LLC.

 

