The AI Regulatory Landscape Across the States: A Look at State Laws on AI

Published October 27th, 2025 in Compliance, Industry Insights, Machine Learning

There’s no shortage of federal regulation in the financial industry. When it comes to governance of artificial intelligence (AI), however, there’s a noticeable absence of federal oversight. A regulatory vacuum has emerged at the federal level, ceding the initiative to individual states. The result is a set of state regulatory models for AI that are pioneering risk frameworks and shaping how those frameworks apply to use cases like AI debt collection strategies.

Navigating state-led AI legislation and regulation in US financial services is complex and requires specific industry knowledge. In this blog post, we’ll dive into why states are leading the charge and the landmark laws and regulations some states are enacting that will set the tone moving forward.

Why AI Regulations Are Being Led By States

Back in July, the current administration released “America’s AI Action Plan,” a package of more than 90 policies focused on building AI infrastructure with the goal of making the US a global leader. AI technology is rapidly being integrated into the US economy and financial services industry. While multiple AI-focused bills have been introduced in Congress over the past few years (including two new bills in July of 2025), none have gained enough traction to pass. This has led to increased tension between federal and state governments, which came to a head when the “One Big Beautiful Bill” was passed on July 4th, 2025.

Originally, the bill included a provision proposing a ten-year moratorium on states enacting or enforcing their own AI laws. The moratorium was a top priority for technology companies trying to avoid a more complex regulatory landscape, but it was stripped from the bill’s final version. Experts agree this move was a clear signal that states will be the primary architects of public policy governing AI for the foreseeable future.

The Federal Stalemate in AI Regulations

One root cause of federal inaction on AI regulation is a debate over how that regulation should look. One camp pushes for a technology-neutral approach, arguing that existing laws are enough to govern AI. For example, existing US laws on discrimination, fraud, and defamation already apply to AI technology and the businesses that use it, so no new laws would be required. In short, this outlook focuses on punishing bad outcomes from AI rather than regulating the technology itself.

On the other side of the debate are regulators who want rules aimed at AI technology itself. A wave of state laws and regulations supports this approach, with Colorado and California pushing new requirements to address AI. These states aren’t just retooling old laws; they’re creating novel legal categories like “deployers” and “developers” of AI and assigning them proactive duties of care. It’s a stance that holds that state laws on AI need specific, rigorous rules in place to better protect consumers.

The Pioneering State Laws on AI You Should Know

The Colorado AI Act (CAIA)

The Colorado AI Act (CAIA), enacted in May 2024, was the first comprehensive, risk-based AI law in the United States. The CAIA created a framework for creators and users of high-risk AI (a category many financial applications fall into). The Act states that developers and deployers of AI technology have a duty of “reasonable care” to protect consumers from the risks the technology poses. One of the top risks called out by the CAIA is algorithmic discrimination: unlawful differential treatment by AI technology of an individual or group of individuals belonging to a protected class.

Developers and deployers meet their duties under the CAIA by adhering to these obligations:

  • Developer Obligations: Makers of AI technologies have to provide extensive documentation on their products, including data on how the AI is trained, the steps the developer takes to prevent bias, the technology’s foreseeable use cases, and more. Developers also have to notify the Colorado Attorney General within 90 days if their AI technology has caused, or is likely to cause, algorithmic discrimination.
  • Deployer Obligations: Deployers (like a company using AI for debt collection) are required to implement and maintain a risk management program. They also have to perform annual assessments of their AI technologies and notify consumers of changes being made. Consumers have the right to correct inaccurate data used by AI systems, and the right to appeal, through human review, any decision they disagree with.
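To make the time-bound pieces of these obligations concrete, here’s a minimal Python sketch of how a deployer might track its annual-assessment date and the CAIA’s 90-day Attorney General notification window. The class and field names are hypothetical illustrations, not anything defined by the statute; only the 90-day notice period and the annual assessment cadence come from the text above.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# CAIA: deployers must notify the Colorado AG within 90 days of discovering
# that a system has caused (or is likely to cause) algorithmic discrimination.
NOTICE_WINDOW_DAYS = 90

@dataclass
class DeployedAISystem:
    """Hypothetical record for one high-risk AI system in a deployer's inventory."""
    name: str
    last_assessment: date
    discrimination_discovered: Optional[date] = None

    def next_assessment_due(self) -> date:
        # CAIA requires deployers to reassess high-risk systems annually.
        return self.last_assessment + timedelta(days=365)

    def ag_notice_deadline(self) -> Optional[date]:
        # 90-day deadline to notify the Attorney General, if an incident exists.
        if self.discrimination_discovered is None:
            return None
        return self.discrimination_discovered + timedelta(days=NOTICE_WINDOW_DAYS)
```

For example, a system whose issue was discovered on March 1, 2025 would have an Attorney General notice deadline of May 30, 2025, ninety days later.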

The Financial Compliance Exception for Colorado’s AI Act

Colorado’s AI Act includes an important provision for the financial industry and companies using AI for debt collection. Under the CAIA, financial institutions such as banks, credit unions, and insurers that already comply with substantially similar regulatory requirements are deemed to be in compliance with the Act. However, this provision doesn’t provide universal protection.

This exemption puts pressure on federal agencies and state banking regulators to develop their own AI governance rules. If they don’t, financial institutions doing business in Colorado could face enforcement under the CAIA. The law is set to indirectly influence national AI regulation standards by setting the bar other regulators have to meet.

California’s Draft Regulations for Automated Decision Making Technology

Another standout in state AI regulation is California’s regulations for automated decisionmaking technology (ADMT). Initially issued in draft form, the regulations have since been approved and go into effect on January 1, 2026, making them the most consumer-rights-focused AI regulations in the US. They’re designed for businesses that use ADMT to make important decisions about consumers, and the regulations’ definition of important or “significant decisions” includes financial services, lending, debt collection, insurance, and much more.

With consumer well-being at the core of these AI regulations, California is establishing three core rights:

  1. Right to a Pre-Use Notice: Before a business uses ADMT for an important decision, it has to notify consumers and explain how the AI technology works in a way that’s easy to understand. 
  2. Right to Opt Out: Consumers have the right to tell businesses that they don’t want ADMT used to make important decisions about them. 
  3. Right to Access Information: Consumers will have the right to request information about the logic used in ADMT processes. For financial institutions, this means businesses won’t be able to deploy AI technology into their operations without a deep understanding of how it works. 
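As a rough illustration of how these three rights might translate into system design, here’s a small Python sketch that gates automated decisions on notice and opt-out status. Every class, method, and field name here is an assumption made for illustration; the regulations define the rights, not any particular API.

```python
class ADMTGateway:
    """Hypothetical gatekeeper applied before an automated significant decision runs."""

    def __init__(self, notice_text: str, logic_summary: str):
        self.notice_text = notice_text        # plain-language pre-use notice (right 1)
        self.logic_summary = logic_summary    # human-readable decision logic (right 3)
        self.opted_out: set = set()           # consumers who opted out (right 2)

    def pre_use_notice(self) -> str:
        # Right to a Pre-Use Notice: shown before ADMT is used for a decision.
        return self.notice_text

    def opt_out(self, consumer_id: str) -> None:
        # Right to Opt Out: record the consumer's refusal of automated decisions.
        self.opted_out.add(consumer_id)

    def may_use_admt(self, consumer_id: str) -> bool:
        # ADMT may only run for consumers who have not opted out.
        return consumer_id not in self.opted_out

    def access_information(self) -> str:
        # Right to Access Information: describe the logic behind ADMT decisions.
        return self.logic_summary
```

The design choice worth noting is that the opt-out check sits in front of the decision step itself, so honoring the right is structural rather than something bolted on after a decision has already been made.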

Many experts say these rights will shift how financial institutions interact with AI technology vendors. The ability to clearly explain the technology will go from a nice-to-have to a requirement, and risk management for AI technology vendors is likely to be overhauled. Due diligence for these partnerships will have to go deeper to ensure the new consumer rights are being honored.

What Does This Mean for AI Debt Collection?

Adoption of AI in debt collection continues to grow because it lets businesses better honor consumer preferences while operating at scale. As bellwether states like Colorado and California set standards for other states to follow, the laws and regulations surrounding AI are shaping up to be a patchwork system similar to that of debt collection compliance.

For businesses looking to benefit from AI in debt collection, you need a partner who’s an expert in compliance and keeps up with state law and regulatory developments. State laws and regulations around AI are going to evolve just as fast as, if not faster than, debt collection rules. Debt collection strategies built to adapt quickly are the most likely to achieve long-term success.

TrueAccord Is Built to Keep Up with Compliance and AI Changes

TrueAccord is an industry-leading recovery and collections platform powered by patented machine learning. Our legal team follows regulatory developments across the country and maintains machine learning governance models to ensure complete compliance control.

When the world is changing fast, you want a debt collection partner with proven flexibility to quickly adjust to new rules and regulations. Contact us today to learn more about how you can collect more from happier people.