The Mess States Are Making of AI Regulation

Lawmakers are proposing hundreds of measures to micromanage and control this emergent technology. A complicated regulatory framework could devastate America’s technology businesses and global competitiveness.

Congressional gridlock and fears of rapid technological advancement in artificial intelligence have uncorked an explosion of counterproductive state AI laws and proposed legislation across the country.

Sadly, America’s vaunted laboratories of democracy are mostly brainstorming different ideas to micromanage and control the development of unique and competing artificial intelligence models, with potentially devastating consequences for America’s technology businesses and global competitiveness.

Imagine if burdensome rules and regulations had stopped the United States from besting the Soviets in the space race after the Sputnik launch. Today, we risk that doomsday scenario in the race to technological preeminence against China.

In 2023, lawmakers in at least 31 states introduced more than 190 AI-related proposals, according to data from the Software Alliance, an industry trade group. That represented a 440 percent increase over the number introduced in 2022. Only a handful of those bills were enacted, but that hasn’t discouraged prospective AI regulators: Today, state legislatures and the District of Columbia are considering nearly 650 pieces of AI-related legislation, many of which define AI differently.

How AI startups and small businesses, often strapped for cash and resources, are supposed to comply with such a potentially complicated regulatory framework remains a mystery. But what is clear is that this is no way to usher in the next technological revolution, nor is it even certain that this revolution will happen under American leadership if the United States does not play its cards right.

Make no mistake: Competition in AI is fierce, and it’s global. Chinese technology companies including Baidu, Huawei and iFlytek are developing their own proprietary AI models, but that country’s reliance on American technology has given the United States a leg up in the global AI race — for now.

Chinese companies use Meta’s open-source Llama model to train some of their AI models and the know-how from American chip-industry engineers and designers to develop their own hardware to run them. But American dominance in open-source AI is far from guaranteed. Meta first released its Llama model in February of last year, and it was quickly outpaced by Falcon 180B from the United Arab Emirates. Only recently did Meta reclaim the edge with its release of an even larger open-source AI model.

Concerned about the global AI race abroad but unwilling or unable to free the technology from the strictures of state regulation at home, lawmakers continue to trumpet legislation that would harm AI development and adoption.

California lawmakers have proposed many bad ideas, including one particularly ill-advised bill passed by the state Senate that would require AI developers to certify, under penalty of perjury, that their models are safe. Under such rules, American innovators who do everything right and take all necessary precautions to make their AI models safe could still be held criminally responsible for the actions of others, crippling open-source AI development and adoption.

Legislation proposed in Hawaii would explicitly enshrine what’s known as the precautionary principle into state law, requiring AI innovators to prove to the government that their technology poses no risk to health, safety or the environment before it can be adopted and used. Colorado Gov. Jared Polis lamented the negative effects that one recent measure passed by his state’s Legislature may have on “critical technological advancements,” but signed it anyway. This is no environment in which AI development can grow and thrive.

In a global economy, companies have options that span continents. Many of those companies choose the United States — and for good reason. The U.S. boasts a favorable tax climate, ample research institutions, talented labor pools and, historically, a strong and fairly consistent rule of law. But there is nothing consistent about an unpredictable AI regulatory landscape spiraling out of control.

Big-government federalism erodes these advantages at every turn just as the next technological revolution sweeps the planet. Had the Internet emerged in such a heavily regulated environment, plagued by a patchwork of state laws and top-down rules, the United States might never have become the global leader of the Internet age.

The United States desperately needs a pre-emptive federal framework that permits experimentation and technological change. That won’t happen if states are left alone to regulate emergent artificial intelligence models. The stakes could not be higher.

Logan Kolas is the director of technology policy at the American Consumer Institute, a nonprofit education and research organization that promotes consumer-focused free-market solutions to state and federal policy challenges.



Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.