Is a State AI Patchwork Next? AI Legislation at a State Level in 2024

Jennifer Huddleston

While Congress debates what, if any, actions are needed around artificial intelligence (AI), many states have passed or considered their own legislation. This did not start in 2024, but it certainly accelerated, with at least 40 states considering AI legislation. Such a trend is not unique to AI, but certain actions at the state level could be particularly disruptive to the development of this technology. In some cases, states could also showcase the many beneficial applications of the technology, well beyond popular services such as ChatGPT.

An Overview of AI Legislation at a State Level in 2024

As of August 2024, 31 states have passed some form of AI legislation. However, what that legislation seeks to regulate varies widely among the states. For example, at least 22 have passed laws regulating the use of deepfake images, usually covering sexual or election-related deepfakes, while 11 states have passed laws requiring that corporations disclose the use of AI or the collection of data for AI model training in some contexts. States are also exploring how government itself can use AI. Concerningly, Colorado has enacted a significant regulatory regime covering many aspects of AI, while California continues to consider such a regime.

Some states are pursuing a lighter-touch approach to AI. For example, 22 states have passed laws creating some form of task force or advisory council to study how state agencies can use (or regulate) AI. Others are focused on ensuring civil liberties are protected in the AI age, such as the 12 states that have passed laws restricting law enforcement's use of facial recognition technology or other AI-assisted algorithms.

Of course, not all state legislation fits these models. Some states have focused on specific aspects, such as the protection of personal likeness more generally (for example, Tennessee's Ensuring Likeness Voice and Image Security Act) or AI deployment in certain contexts.

State AI Regulations Focused on Election-Related Use

Many lawmakers are concerned about potential misinformation on social media or deepfakes of candidates, which they fear will be amplified by broad access to AI technology. As a result, some have sought to pass state laws regulating deepfakes depicting political candidates or the use of AI to generate or spread election-related misinformation. There is some variance: some states ban such acts altogether, while others only require a disclaimer of AI use. But for the most part, existing state law is fairly consistent about the harms legislators are trying to address and would likely be sufficient to address harmful use cases.

The list of states that have passed such laws is long and includes both red and blue states. Alabama, Arizona, California, Florida, Idaho, Indiana, Michigan, Minnesota, Mississippi, New Mexico, New York, Oregon, Texas, Utah, Washington, and Wisconsin have all passed election-related AI legislation. 

While these laws may be well-intentioned efforts to ensure the public has reliable election information, they can have serious consequences, particularly for speech. Severely restricting the use of AI, even only within the election context, risks violating First Amendment rights. Even something that may appear simple, like a ban on the use of AI in election media, can have far-reaching consequences. Does it mean a candidate can't use an AI-generated image in the background of an ad? Would it be illegal for a junior staffer to use ChatGPT to help write a press release? Without proper guidance, even the simplest state laws can be overbroad and impede the legitimate and beneficial use of this technology.

Furthermore, such laws may not even be necessary. There is no denying that AI has been used for dangerous purposes, potentially including in an election context, but research suggests that the threat from AI may be overblown. AI might be able to generate content faster than a human, but that does not mean it creates more convincing fakes. Additionally, tech companies have become adept at spotting and removing deepfakes.

Model-Level AI Regulation: A Concerning and Disruptive Patchwork

Perhaps the most potentially disruptive state-level attempts at AI regulation are those that seek to regulate AI comprehensively or at the model level. The most notable examples of such an approach are the legislation passed in Colorado and the proposal in California's SB 1047. As with many technology issues, AI is likely to be interstate in nature rather than clearly tied to a location within any one state's borders.

The concerning consequences of such an approach are explored more thoroughly in the Cato blog “Words to Fear: I’m From the State Government, and I’m Here to Help with AI Risk.” As stated in that piece, “This shortsighted regulatory playbook—constraining business models, burdening developers with responsibility for downstream risks, and targeting technologies instead of harms—is being employed all too often at the state level. After all, SB 1047 is a notorious vehicle for all three, making open-source AI development a compliance risk by requiring developers to lock down their models against certain downstream modifications, as well as targeting technical sophistication, not merely specific threats.”

State-level legislation like this risks a patchwork effect in which the most restrictive policies become de facto federal policy, with an impact well beyond state borders. Even when such regulations target deployment rather than development, they can still have similar impacts or leave citizens losing out on beneficial technologies available to others.

Where States Are Acting as Appropriate Laboratories: Civil Liberty Protection, Government Applications, and AI Studies

There are some cases where states may have the opportunity to act as laboratories of democracy and determine how certain AI questions might best be resolved. These situations are mostly intrastate and relate to opportunities either to restrain state power and preserve civil liberties or to provide positive use cases for AI technology within state governments. Similarly, state legislatures, like the federal government, have an opportunity to examine how existing regulations may prevent beneficial AI development and deployment.

The list of states that have established task forces to study the proper way to regulate AI includes Alabama, California, Colorado, Connecticut, Florida, Illinois, Indiana, Louisiana, Maryland, Massachusetts, New Jersey, Oklahoma, Oregon, Pennsylvania, Rhode Island, Tennessee, Texas, Utah, Virginia, Washington, West Virginia, and Wisconsin.

There are several things states could consider in their examination of AI and its impact. State-level policy could focus on clarifying when the government or law enforcement can use AI, resolving civil liberties concerns without significant extraterritorial effects. This is similar to how restrictions on warrantless government access to data can address privacy concerns without the spillover effects of many data privacy laws, and how clarity around the deployment of facial recognition technology can do the same. Such action could also provide models of which laws work best and where further consideration is needed, such as warrant requirements for the use of certain technologies versus flat-out moratoriums, or what consent looks like when data collection is mandatory.

Similarly, states have an opportunity to consider potential positive uses of AI in their own governments. This could include tools that help provide constituent services or that improve efficiency and reduce costs within government. Again, this could provide positive examples for other governments and positive interactions with the technology for consumers.

Finally, as with the federal government, states should consider whether there are elements of their regulatory codes that make the development of AI harder than it should be. This could include everything from existing tech policy regulations, such as data privacy rules, to more general regulations, such as occupational licensing requirements. AI could also provide opportunities for states to implement sandboxing or other soft-law approaches for AI applications, such as driverless cars or AI-based advice in currently regulated fields.

Conclusion: What Comes Next, and What Does It Mean for AI Development?

As the United States considers what framework, if any, to pursue for AI at the federal level, states have already enacted significant legislation. In some cases, without strong federal preemption, these laws could significantly disrupt the development and deployment of AI technologies. There are, however, opportunities for states to focus on intrastate applications, such as ensuring civil liberties or restricting or embracing the government's use of AI. What seems certain is that states will continue to consider a wide range of policies that could impact this important technology.

As AI continues to develop and become more integrated into everyday life, some of the techno-panic will hopefully fade. Disruption and uncertainty often create such fears, and AI is not the first technology to do so. Policymakers, however, should not regulate merely out of fear or uncertainty, but only when there is a genuine need to prevent harms not already addressed by existing law. When they do act, policy should be narrowly tailored to the harm at issue while also considering important trade-offs with other values, such as speech and innovation.

Adi Kumar contributed to this blog.
