Navigating US State-Level AI Regulations: A Practical Guide for Agencies
Your agency just signed a contract with a national insurance company that operates in 42 states. The project involves building an AI-driven claims processing system that automatically approves, routes, or flags insurance claims. During the scoping call, the client's compliance officer asked a straightforward question: "Which state AI regulations apply to this system, and how will you ensure compliance?" Your team went silent. You knew about the EU AI Act. You had a vague awareness that Colorado had passed something about AI. But the idea that there might be a patchwork of state-level AI regulations affecting a single deployment hadn't been on your radar. After the call, you started researching and discovered that your client's AI system would potentially need to comply with different requirements in Colorado, Illinois, New York City, Maryland, Connecticut, Texas, and several other jurisdictions, each with its own definitions, thresholds, and obligations.
While the federal government continues to debate comprehensive AI legislation, US states and municipalities are not waiting. They are enacting AI regulations that directly affect the systems agencies build and deploy. And because most enterprise clients operate across multiple states, a single AI system may need to comply with regulations from a dozen or more jurisdictions. For agencies, this creates a compliance challenge that's complex but navigable if you have the right framework.
The State AI Regulation Landscape
US state AI regulation is developing along several tracks, each addressing different aspects of AI governance.
Employment and Hiring AI
This is the most active area of state AI regulation, driven by concerns about algorithmic discrimination in hiring.
New York City Local Law 144 was one of the first major AI regulations in the US. It requires employers and employment agencies using automated employment decision tools (AEDTs) to conduct annual bias audits by independent auditors and provide notice to candidates that an AEDT is being used. The law defines AEDTs broadly to include any computational process that substantially assists or replaces discretionary decision-making in employment.
Illinois' AI Video Interview Act requires employers that use AI to analyze video interviews to notify candidates, explain how the AI works, and obtain consent before using the technology. Candidates can also request deletion of their videos, which must be honored within 30 days.
Maryland's HB 1202 restricts the use of facial recognition technology in hiring, requiring applicant consent before facial recognition can be used during interviews.
Colorado's AI Act (effective 2026) takes a broader approach, requiring deployers of high-risk AI systems, including those used in employment decisions, to conduct impact assessments, provide notice to consumers, and implement risk management programs. It applies to any AI system that makes or substantially contributes to consequential decisions about consumers.
What this means for agencies: If you build AI tools used in hiring, recruiting, or talent management, you need to ensure your systems can comply with the jurisdiction-specific requirements of every state where your client operates. This includes bias audit capabilities, notice generation, consent management, and human review mechanisms.
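Bias-audit support can start with selection-rate and impact-ratio computations, the kind of metric NYC Local Law 144's bias audits center on. A minimal sketch in Python; the group labels and data shape are illustrative assumptions, not anything a regulation prescribes:

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute the selection rate for each demographic group.
    outcomes: iterable of (group, selected_bool) pairs."""
    totals, selected = Counter(), Counter()
    for group, picked in outcomes:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(outcomes):
    """Impact ratio: each group's selection rate divided by the
    highest group's rate. Ratios well below 1.0 warrant scrutiny."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rates[g] / top for g in rates}
```

A real audit would segment these ratios by job category and time period and be run by an independent auditor; this only shows the core arithmetic.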
Consumer Protection and High-Risk AI
Several states are regulating AI through consumer protection frameworks.
Colorado's AI Act is the most comprehensive state-level AI regulation in the US. It applies to "high-risk AI systems" that make or substantially contribute to "consequential decisions" in areas including employment, education, financial services, healthcare, housing, insurance, and legal services. Deployers must implement risk management programs, conduct impact assessments, provide notice to consumers about AI use, provide the ability to appeal AI decisions, and disclose to the attorney general if they discover substantial risk of algorithmic discrimination.
Connecticut has advanced similar legislation (Senate Bill 2) that would require deployers of high-risk AI systems to conduct impact assessments, notify consumers when AI is used in consequential decisions, and give consumers the ability to appeal AI decisions.
Texas has enacted legislation requiring disclosure when AI is used in certain consumer-facing decisions and providing consumers with the right to know when they're interacting with AI-generated content.
What this means for agencies: If you build AI systems that affect consumers in any of the regulated decision areas, your systems need built-in compliance features: notice mechanisms, appeal pathways, impact assessment support, and discrimination detection capabilities.
Privacy-Related AI Regulations
Several state privacy laws include provisions that affect AI systems.
California (CCPA, as amended by the CPRA) gives consumers rights around automated decision-making technology: the right to opt out in certain contexts, the right to know what personal information is used in automated decisions, and the right to access meaningful information about the logic involved.
Virginia (VCDPA) gives consumers the right to opt out of profiling in furtherance of decisions that produce legal or similarly significant effects.
Other state privacy laws, in Colorado, Connecticut, Utah, Iowa, Indiana, Montana, Tennessee, and elsewhere, include varying rights related to automated decision-making and profiling.
What this means for agencies: AI systems that process personal data must support consumer rights including opt-out mechanisms, data access requests, and transparency about automated decision-making logic.
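The opt-out right in particular has a direct architectural consequence: the decision pipeline needs a branch point before the model is ever invoked. A hedged sketch, where `opted_out_ids` and the handler functions are hypothetical stand-ins for a real consent store and real decision paths:

```python
def route_decision(consumer_id, opted_out_ids, automated_fn, manual_fn):
    """Honor an automated-decision opt-out: consumers who have opted
    out are routed to human review instead of the model."""
    if consumer_id in opted_out_ids:
        return manual_fn(consumer_id)   # human-review path
    return automated_fn(consumer_id)    # model path
```

The point of the sketch is placement: the opt-out check sits upstream of the model call, so an opted-out consumer's data never reaches the automated path at all.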
Insurance-Specific AI Regulations
Insurance regulators have been particularly active in addressing AI.
Colorado's insurance AI regulations require insurers to test AI models for unfair discrimination before deployment and to demonstrate that the models don't use protected characteristics or their proxies inappropriately.
The NAIC (National Association of Insurance Commissioners) Model Bulletin provides guidance on the use of AI in insurance, emphasizing the insurer's responsibility for the AI systems used in their operations, regardless of whether the systems were developed in-house or by third parties. Multiple states have adopted or are considering adopting this guidance.
What this means for agencies: If you build AI systems for the insurance industry, your client's regulators will scrutinize the models closely. You need robust fairness testing, documentation, and the ability to demonstrate that your models don't discriminate against protected groups.
Deepfakes and Synthetic Content
Several states have enacted laws addressing AI-generated content.
California requires disclosure when AI-generated content is used in political advertising and provides legal remedies for individuals whose likeness is used in AI-generated content without consent.
Texas criminalizes the creation and distribution of deepfake videos intended to influence elections.
Multiple states have enacted laws addressing non-consensual intimate images generated by AI.
What this means for agencies: If you build content generation systems, your systems may need to include watermarking, disclosure mechanisms, or content provenance tracking to comply with these regulations.
Building a Multi-State Compliance Framework
Step 1: Map Your Regulatory Exposure
For each client and project, determine which state regulations apply.
Identify the relevant jurisdictions. Where does your client operate? Where are the end users or affected individuals located? AI regulations typically apply based on where the affected individuals are located, not where the deployer is headquartered or where the AI system is hosted.
Classify the AI system. Determine whether the system falls into regulated categories (employment decisions, consumer-facing decisions, insurance, etc.) in each jurisdiction. The same system may be classified differently in different states.
Map applicable requirements. For each applicable regulation, identify the specific obligations: notice requirements, bias testing requirements, impact assessment requirements, consumer rights, and reporting obligations.
Document the mapping. Create a regulatory compliance map that shows which regulations apply to the system and how each requirement is addressed. This document becomes a key governance artifact.
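One way to make the mapping concrete is a small data structure keyed by jurisdiction. The sketch below is illustrative only; the jurisdiction codes, regulation names, and fields are assumptions you would adapt to your own governance process:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Obligation:
    regulation: str               # e.g. "Colorado AI Act" (illustrative)
    requirement: str              # e.g. "annual impact assessment"
    deadline: Optional[str] = None

@dataclass
class ComplianceMap:
    """Regulatory compliance map: which obligations apply to one
    AI system, grouped by jurisdiction."""
    system_name: str
    entries: dict = field(default_factory=dict)

    def add(self, jurisdiction: str, obligation: Obligation) -> None:
        self.entries.setdefault(jurisdiction, []).append(obligation)

    def jurisdictions(self) -> list:
        return sorted(self.entries)
```

Serializing a structure like this gives you the governance artifact the step describes: a reviewable record of which regulations apply and how each is addressed.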
Step 2: Design for the Most Restrictive Standard
Rather than building different compliance mechanisms for each state, design your system to meet the most restrictive requirements across all applicable jurisdictions.
For bias testing, implement the most comprehensive testing methodology required by any applicable regulation. If one state requires testing across three demographic dimensions and another requires five, test across all five for every deployment.
For notice and disclosure, build a notice framework that can be configured for each jurisdiction's requirements. The underlying mechanism should be the same; the content and timing of notices should be configurable.
For consumer rights, implement the broadest set of consumer rights required by any applicable regulation. If one state requires opt-out of automated decisions and another requires appeal rights, build both into the system.
For impact assessments, conduct assessments that meet the most comprehensive requirements. One thorough impact assessment is more efficient than multiple partial assessments tailored to different jurisdictions.
This approach is more efficient than maintaining separate compliance mechanisms for each state, and it provides a buffer against new regulations that may impose additional requirements.
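Mechanically, designing for the most restrictive standard is a merge: take the union of required features and the maximum of any numeric thresholds across jurisdictions. A minimal sketch using the three-versus-five demographic-dimensions example above; the config format and state labels are assumptions:

```python
def most_restrictive(configs):
    """Merge per-state compliance configs into one design target:
    union of required features, max of numeric bias-test dimensions."""
    features, dims = set(), 0
    for cfg in configs.values():
        features |= set(cfg.get("features", ()))
        dims = max(dims, cfg.get("bias_test_dimensions", 0))
    return {"features": features, "bias_test_dimensions": dims}
```

Because the merge is monotone, adding a newly regulated state to the input can only tighten the target, which is exactly the buffer against future regulation the text describes.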
Step 3: Build Compliance Infrastructure
Several infrastructure components support multi-state compliance.
Notice management system. Build a system that delivers appropriate notices to users and affected individuals based on their jurisdiction. The system should be configurable to accommodate new notice requirements as regulations evolve.
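At its simplest, such a system is a jurisdiction-keyed template lookup with a generic fallback, so new jurisdictions are a configuration change rather than a code change. The template keys and notice copy below are illustrative placeholders, not regulator-approved language:

```python
# Illustrative jurisdiction keys and notice text (placeholders only).
NOTICE_TEMPLATES = {
    "NYC": "An automated employment decision tool will be used to assess you...",
    "IL": "Artificial intelligence will be used to analyze your video interview...",
    "DEFAULT": "This process uses an automated decision system...",
}

def notice_for(jurisdiction: str) -> str:
    """Return the notice text configured for a jurisdiction,
    falling back to a generic default."""
    return NOTICE_TEMPLATES.get(jurisdiction, NOTICE_TEMPLATES["DEFAULT"])
```

In production the templates would live in configuration reviewed by counsel, and the lookup would also drive timing (pre-use notice, point-of-decision notice) per jurisdiction.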
Audit trail infrastructure. Build comprehensive logging that captures all AI decisions, the factors that contributed to each decision, and any human review that occurred. This audit trail supports both bias audits and individual appeals.
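An append-only JSON-lines log is one straightforward way to capture this. A sketch, assuming the caller passes the contributing factors and any human-review identifier; the field names are illustrative:

```python
import json
import time

def log_decision(log_file, decision_id, outcome, factors, human_review=None):
    """Append one AI decision to an append-only JSON-lines audit log:
    what was decided, which factors contributed, and any human review."""
    record = {
        "id": decision_id,
        "timestamp": time.time(),
        "outcome": outcome,
        "factors": factors,            # e.g. {"claim_amount": 0.6, ...}
        "human_review": human_review,  # None if fully automated
    }
    log_file.write(json.dumps(record) + "\n")
    return record
```

One record per line keeps the log greppable and lets bias audits and individual appeals replay exactly what the system saw.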
Consumer rights portal. For consumer-facing AI systems, build a portal where consumers can exercise their rights: opting out of automated decisions, requesting information about how AI was used in their case, and filing appeals.
Bias audit capability. Build the infrastructure to support regular bias audits, including the ability to segment results by jurisdiction, demographic group, and time period.
Impact assessment framework. Create a standardized impact assessment process that meets the requirements of all applicable jurisdictions.
Step 4: Monitor Regulatory Changes
The state AI regulatory landscape is evolving rapidly. New regulations are proposed and enacted regularly, and existing regulations are amended as regulators gain experience.
Subscribe to regulatory tracking services. Several organizations track state AI legislation and provide alerts when new laws are proposed, enacted, or amended.
Maintain a regulatory calendar. Track effective dates, compliance deadlines, and reporting obligations across all applicable jurisdictions.
Conduct quarterly regulatory reviews. At least quarterly, review the regulatory landscape and assess whether any changes affect your clients' AI systems.
Build regulatory flexibility into your designs. Design your compliance infrastructure to be configurable so that new requirements can be accommodated without rebuilding the system.
Step 5: Document and Communicate
Multi-state compliance requires clear documentation and communication.
Create jurisdiction-specific compliance guides for your clients. For each jurisdiction, document the applicable regulations, the compliance measures in place, and the client's ongoing obligations.
Brief client compliance teams. Your clients' compliance teams need to understand the regulatory landscape and their obligations. Provide briefings that explain the requirements in practical terms.
Include regulatory compliance in your deliverables. Your model documentation and model cards should include a regulatory compliance section that maps the system to applicable regulations.
Common Challenges and How to Address Them
Conflicting requirements. Occasionally, different states' regulations may appear to conflict. When this happens, consult legal counsel to determine the appropriate approach. In most cases, meeting the more restrictive requirement satisfies both.
Definitional differences. Different states define key terms differently. "Automated decision tool," "high-risk AI," and "consequential decision" may have different meanings in different jurisdictions. Your compliance framework needs to account for these definitional variations.
Enforcement uncertainty. Many state AI regulations are new, and enforcement patterns haven't been established. Err on the side of compliance rather than testing the boundaries of uncertain enforcement.
Preemption questions. If and when federal AI legislation is enacted, it may preempt some state regulations. Until then, agencies must comply with each state's regulations individually.
Resource constraints. Multi-state compliance is resource-intensive. For agencies with limited compliance resources, focus on the states where your clients have the most significant operations and the regulations with the most immediate enforcement implications.
Turning Regulatory Knowledge into a Service
Your understanding of the state AI regulatory landscape is valuable to clients who may not have the expertise to navigate it themselves.
Offer regulatory mapping as a service. Help clients identify which regulations apply to their AI systems and what compliance measures are needed.
Build compliance into your project proposals. Show clients that you understand the regulatory landscape and have built compliance into your approach. This differentiates you from competitors who treat regulation as an afterthought.
Provide ongoing regulatory monitoring. Offer a subscription service that alerts clients to regulatory changes affecting their AI systems and recommends compliance adjustments.
Your Next Steps
This week: Identify which states your current clients operate in and determine whether any existing AI regulations apply to the systems you've built or are building.
This month: Create a regulatory map for your most significant client engagement. Document all applicable state regulations, the compliance measures currently in place, and any gaps.
This quarter: Build a regulatory compliance template that your team uses at the scoping phase of every new project. Include a state regulation reference guide that's updated as new regulations are enacted.
The state AI regulatory landscape is complex and evolving, but it's navigable. Agencies that invest in understanding and complying with state regulations position themselves as trusted partners for enterprise clients operating across multiple jurisdictions. Those that ignore the patchwork until an enforcement action arrives will find the catch-up game expensive and disruptive. Start mapping now.