More States Reject Fear-Based AI Regulation

The course of legislation in Virginia and Texas suggests a way forward in regulating AI without stifling innovation.

Virginia Gov. Glenn Youngkin vetoed an AI bill that he warned could throttle technology and jobs. (Stephen M. Katz/The Virginian-Pilot/TNS)
New developments in Virginia and Texas signal that the debate over artificial intelligence (AI) policy could be turning in a more positive, pro-innovation direction in the states. Less than three months into the year, an astonishing 900-plus AI-related legislative proposals (roughly 12 per day) have already been introduced.

Most seek to impose new regulations on algorithmic systems. This represents an unprecedented level of regulatory interest in an emerging technology.

On March 24, Virginia Republican Gov. Glenn Youngkin vetoed a major AI regulatory measure that would have compromised the Commonwealth’s ability to continue leading state-level digital innovation. In his veto statement for HB 2094, the “High-Risk Artificial Intelligence Developer and Deployer Act,” Youngkin correctly noted that the bill “would harm the creation of new jobs, the attraction of new business investment, and the availability of innovative technology in the Commonwealth of Virginia.”

Additionally, the Chamber of Progress estimated that the bill would have imposed nearly $30 million in compliance costs on AI developers, which would have devastated the state’s small tech startups (as R Street testimony on the bill explained).

Importantly, Youngkin’s veto came just 10 days after Texas GOP Rep. Giovanni Capriglione introduced an overhauled version of his “Texas Responsible AI Governance Act” (TRAIGA), a bill that would have heavily regulated AI innovation in the state and that attracted widespread opposition. While the original version was very similar to the Virginia bill, the new version (HB 149) sheds the most heavy-handed elements of the earlier measure.

As some states continue to pursue a European-style approach to AI regulation, these developments in Virginia and Texas represent a potentially important turning point in AI policy. They also better align state AI policy with a new national focus on AI opportunity and investment—particularly important in the wake of major Chinese advances in this field.

Rejecting the EU Approach


The Virginia AI bill Youngkin vetoed was one of many similarly worded bills pushed by the Multistate AI Policymaker Working Group (MAP-WG), a coalition of state lawmakers from more than 45 states attempting to create a consensus “AI discrimination” bill that could be repurposed across state legislatures. These copycat bills are currently pending in about a dozen states, including California, Connecticut, Massachusetts, Nebraska, New Mexico and New York.

Last May, Colorado became the first state to pass one of these AI discrimination bills; however, problems became evident even before implementation. The earlier version of Texas’ TRAIGA bill originally followed this same model but now largely avoids it.

These MAP-WG state AI bills meld elements of the European Union’s new AI Act and the Biden administration’s approach to AI policy, especially as articulated in its “Blueprint for an AI Bill of Rights.” Both approaches were fundamentally fear-based in that they viewed algorithmic systems as “unsafe, ineffective, or biased” and “deeply harmful.”

On Jan. 23, President Donald Trump repealed the Biden administration’s 2023 AI executive order, the longest in history, and replaced it with a new one. The new order, “Removing Barriers to American Leadership in Artificial Intelligence,” stresses the need “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”

On Feb. 11, Vice President JD Vance delivered a major keynote address at the Paris AI Action Summit that more fully developed this “AI opportunity agenda.” Vance also explained how “excessive regulation of the AI sector could kill a transformative industry just as it’s taking off,” and promised to “make every effort to encourage pro-growth AI policies.”

Other Policies Offer Protections


Despite this change in attitude and direction on AI policy from the new administration, many states continue to advance regulatory proposals that mimic Biden policy statements, viewing AI less as an opportunity for America to embrace and more as a danger to avoid. The influence of Europe’s regulatory model is broadly evident throughout the MAP-WG bills.

Like the EU’s new AI Act, these state bills seek to regulate hypothetical future harms that might come about from AI systems. These bills are particularly concerned with the potential for “algorithmic bias” or other harms developing from “high-risk” AI applications.

Importantly, if such harms did develop, then many existing state and federal policies — including overlapping civil rights laws and unfair and deceptive practices regulations — would address these issues. But, much like European tech regulations, these new state AI anti-discrimination bills seek to regulate preemptively, before any such harms are proven.

This sort of technocratic ex ante regulation can be costly and confusing because state bureaucrats must determine which AI innovations are allowed to go to market based on speculative fears. As Youngkin explained in his veto statement, “HB 2094’s rigid framework fails to account for the rapidly evolving and fast-moving nature of the AI industry and puts an especially onerous burden on smaller firms and startups that lack large legal compliance departments.”

Avoiding Colorado’s Broken AI Model


Youngkin’s veto and the introduction of the significantly revised Texas bill mean some state lawmakers are coming to understand the costs and complexities of such regulation. This was also the lesson of Colorado’s new AI law.

When Colorado was considering its AI regulation (SB24-205), several small and mid-sized tech entrepreneurs in the state sent a letter to lawmakers explaining how its “vague and overbroad” mandates “would severely stifle innovation and impose untenable burdens on Colorado’s businesses, particularly startups.” They specifically noted that predicting the “foreseeable” risks of general-purpose AI is “essentially impossible and invites litigation against fundamental and socially valuable innovations.” They also argued that the bill raised First Amendment concerns.

Although Colorado Democratic Gov. Jared Polis chose to sign the bill into law, he agreed with the tech companies that it would “create a complex compliance regime for all developers and deployers of AI” through “significant, affirmative reporting requirements.” He also stated his concern regarding “the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike.”

Following that remarkable admission, Polis ordered the formation of a task force to address concerns “that an overly broad definition of AI, coupled with proactive disclosure requirements, could inadvertently impose prohibitively high costs on them, resulting in barriers to growth and product development, job losses, and a diminished capacity to raise capital.”

The Colorado AI Impact Task Force released only vague recommendations that failed to address these problems adequately. No real solutions were proposed, largely because every attempt to soften the blow of the new regulations met with opposition from pro-regulatory groups that wanted even more onerous mandates on innovators. The task force concluded its work in January on an unsatisfying note, stating only that there were many “issues with firm disagreement on approach and where creativity will be needed.” Unfortunately, it never addressed the concerns that Polis and state AI developers had raised about the law’s destructive potential.

This makes it clear that Colorado’s fundamentally flawed law is not a good model for other states to follow. Importantly, Polis was so concerned about its potential negative effects that he called for a “cohesive federal approach” to “limit and preempt varied compliance burdens on innovators and ensure a level playing field across state lines along with ensuring access to life-saving and money-saving AI technologies for consumers.”

In the meantime, other states should heed the lessons from Virginia and Texas. Youngkin’s veto and the highly revised Texas bill send a clear message to other state lawmakers and governors considering similar measures: It would be a mistake to impose costly and confusing mandates on America’s AI entrepreneurs by importing the European regulatory model to the United States.

There are better ways for states to address AI concerns than a heavy-handed, top-down, paperwork-intensive regulatory approach. As Youngkin concluded in his veto statement, “The role of government in safeguarding AI practices should be one that enables and empowers innovators to create and grow, not one that stifles progress and places onerous burdens on our Commonwealth’s many business owners.”

The same is true for every other state — and for the nation as a whole.

Adam Thierer is a resident senior fellow in technology and innovation at the R Street Institute, which originally published this article.



Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.