The regulatory landscape for AI is moving fast. The EU AI Act is being enforced. US states are passing their own AI laws. Industry-specific regulators are issuing AI guidance. And your enterprise clients are increasingly asking: "How do you ensure the AI systems you build comply with applicable regulations?"
Most AI agencies ignore regulations until a client asks about them, then scramble to provide answers. This is a losing strategy. Regulation is not going away; it is accelerating. The agencies that understand the regulatory landscape, build compliance into their delivery process, and position regulatory expertise as a differentiator will win enterprise contracts. The ones that treat regulation as somebody else's problem will lose deals to agencies that take it seriously.
The Current Regulatory Landscape
EU AI Act
The most comprehensive AI-specific regulation globally. Key provisions:
Risk-based classification: AI systems classified into risk categories:
- Unacceptable risk: Banned entirely (social scoring, real-time remote biometric identification in public spaces with limited exceptions, manipulative AI)
- High risk: Subject to strict requirements (AI in employment, education, credit, law enforcement, critical infrastructure)
- Limited risk: Transparency obligations (chatbots must disclose they are AI, deepfakes must be labeled)
- Minimal risk: No specific requirements
Requirements for high-risk systems:
- Risk management system
- Data governance and management
- Technical documentation
- Record-keeping and logging
- Transparency and information to users
- Human oversight measures
- Accuracy, robustness, and cybersecurity
- Conformity assessment before deployment
Who is affected: Any entity placing AI systems on the EU market or deploying them in the EU, regardless of where the entity is located. If your client serves EU customers, the AI Act likely applies.
Implications for agencies: You need to understand which risk category your client's use case falls into and build compliance requirements into the project from the start.
US Federal Landscape
No comprehensive federal AI law yet, but significant activity:
Executive orders and guidance: Executive orders on AI safety and governance have established frameworks for federal AI use and encouraged responsible AI development.
Existing laws applied to AI: Anti-discrimination laws (Title VII, ECOA, ADA, Fair Housing Act) apply to AI-driven decisions. The FTC has enforcement authority over deceptive and unfair AI practices.
Sector-specific regulation: Financial regulators (OCC, Federal Reserve, FDIC) have issued AI guidance for banking. Healthcare regulators (FDA) regulate AI-based medical devices. Employment regulators (EEOC) have issued guidance on AI in hiring.
US State Laws
States are not waiting for federal action:
Colorado AI Act: Requires developers and deployers of high-risk AI systems to exercise reasonable care to avoid algorithmic discrimination. Mandates impact assessments and transparency.
Illinois AI Video Interview Act: Requires notice and consent when AI analyzes video interviews for hiring.
New York City Local Law 144: Requires bias audits for automated employment decision tools.
California and other states: Multiple bills proposed or enacted covering AI transparency, bias, and consumer protection.
International Regulations
Canada: The Artificial Intelligence and Data Act (AIDA) proposes requirements for high-impact AI systems.
Brazil: AI regulation modeled partially on the EU AI Act.
China: Multiple AI-specific regulations covering generative AI, algorithmic recommendations, and deepfakes.
UK: Sector-specific approach rather than comprehensive legislation, with regulators issuing AI guidance within their existing mandates.
What Agencies Need to Do
Understand the Client's Regulatory Environment
During discovery, identify the regulations that apply:
- Where does the client operate? (EU, specific US states, internationally)
- What industry is the client in? (Financial services, healthcare, education, employment)
- Who are the end users of the AI system? (Consumers, employees, patients, borrowers)
- What decisions does the AI system inform or make? (Employment, credit, healthcare, housing)
- Does the client already have a compliance framework? (What existing requirements apply?)
Map the answers to specific regulatory requirements. Document this analysis and share it with the client.
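The mapping step can be sketched as a simple rules lookup over the discovery answers. This is a minimal illustration, not legal advice: the jurisdiction codes, rule entries, and field names below are assumptions, and a real rules table would be built with legal counsel and kept under review.

```python
from dataclasses import dataclass

@dataclass
class ClientProfile:
    """Discovery answers relevant to regulatory scoping."""
    jurisdictions: set[str]     # e.g. {"EU", "US-CO", "US-NYC"}
    decision_domains: set[str]  # e.g. {"hiring", "credit"}

# Illustrative rules: (jurisdictions, decision domains, regulation).
# An empty domain set means the rule applies regardless of domain.
RULES = [
    ({"EU"}, set(), "EU AI Act"),
    ({"US-CO"}, {"hiring", "credit"}, "Colorado AI Act"),
    ({"US-NYC"}, {"hiring"}, "NYC Local Law 144"),
    ({"US-IL"}, {"hiring"}, "Illinois AI Video Interview Act"),
]

def applicable_regulations(profile: ClientProfile) -> list[str]:
    """Return regulations whose jurisdiction and domain conditions match."""
    hits = []
    for jurisdictions, domains, name in RULES:
        if jurisdictions & profile.jurisdictions and (
            not domains or domains & profile.decision_domains
        ):
            hits.append(name)
    return hits

client = ClientProfile(
    jurisdictions={"EU", "US-NYC"},
    decision_domains={"hiring"},
)
print(applicable_regulations(client))  # ['EU AI Act', 'NYC Local Law 144']
```

Even a rough table like this forces the discovery conversation to produce concrete answers, and the output doubles as the starting point for the documented analysis shared with the client.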
Build Compliance Into Delivery
Do not bolt compliance on at the end. Integrate it into every phase:
Discovery: Regulatory analysis and requirements identification. Define compliance requirements in the project scope.
Design: Architecture that supports compliance requirements (logging, human oversight, explainability). Include compliance in design reviews.
Development: Implement technical compliance controls (bias testing, transparency features, audit logging). Include compliance in the definition of done.
Testing: Compliance-specific testing (bias audits, fairness assessments, security testing, privacy testing). Document test results.
Deployment: Verify all compliance controls are active in production. Complete required documentation.
Maintenance: Ongoing compliance monitoring. Regulatory change tracking. Periodic compliance audits.
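One recurring development-phase control is tamper-evident audit logging of AI-assisted decisions, supporting the record-keeping obligations above. A minimal sketch, assuming decisions are recorded in-memory with a hash chain so later alteration is detectable; the field names are illustrative, and a production version would persist to append-only storage:

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only decision log with a hash chain for tamper detection."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def record(self, inputs: dict, output: str, model_version: str) -> dict:
        """Append one decision; each entry commits to the previous entry's hash."""
        entry = {
            "timestamp": time.time(),
            "inputs": inputs,
            "output": output,
            "model_version": model_version,
            "prev_hash": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

Running `verify()` as part of periodic compliance audits gives the maintenance phase a concrete, testable check rather than a policy statement.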
Develop Regulatory Expertise
Invest in understanding the regulatory landscape:
Assign ownership: Designate someone on your team to track AI regulatory developments. This does not need to be a full-time role but needs dedicated attention.
Build relationships: Connect with legal professionals who specialize in AI regulation. You need legal counsel available when complex regulatory questions arise.
Stay current: Follow regulatory bodies, industry groups, and legal publications that cover AI regulation. The landscape changes frequently.
Train your team: Ensure all team members understand the basics of AI regulation as it affects their work. Developers need to understand why they are implementing certain controls.
Document Everything
Regulatory compliance is fundamentally about documentation. Implement documentation practices that satisfy regulatory requirements:
Technical documentation: System architecture, model documentation, data handling procedures, security controls. Required by the EU AI Act for high-risk systems.
Risk assessments: Documented analysis of risks and mitigation measures. Required by multiple regulations.
Bias audits: Documented testing for algorithmic discrimination with results and actions. Required by NYC Local Law 144 and Colorado AI Act.
Impact assessments: Assessment of the AI system's potential impact on individuals and groups. Required by the EU AI Act and several state laws.
Decision logs: Records of design decisions, model selection rationale, and configuration choices. Supports audit trail requirements.
Monitoring records: Ongoing performance monitoring data, bias monitoring results, and incident records. Supports accountability requirements.
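The bias-audit item above typically includes, at minimum, selection rates and impact ratios per demographic group (impact ratios are among the metrics NYC Local Law 144 audits report). A hedged sketch of that core calculation; the ~0.8 threshold mentioned in the comment is the common "four-fifths rule" screening heuristic, not a legal determination:

```python
from collections import defaultdict

def impact_ratios(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate per group, divided by the highest group's rate.

    records: (group_label, was_selected) pairs. Ratios below ~0.8 are a
    common screening threshold, flagging groups for closer review.
    """
    selected: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Illustrative data: group A selected 40/100, group B selected 20/100.
data = [("A", True)] * 40 + [("A", False)] * 60 + \
       [("B", True)] * 20 + [("B", False)] * 80
print(impact_ratios(data))  # {'A': 1.0, 'B': 0.5}
```

The numeric output is only the starting point of an audit; the documented deliverable also records methodology, data provenance, and the remedial actions taken.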
Compliance as Competitive Advantage
In the Sales Process
Use regulatory expertise as a differentiator:
"We understand the regulatory requirements that apply to your AI use case and build compliance into our delivery process from day one. This includes [specific requirements based on the client's situation]. Our compliance-ready approach protects your organization and avoids costly retrofitting."
In Proposals
Include compliance as a visible workstream in proposals:
- Regulatory analysis and requirements mapping
- Compliance-by-design architecture
- Bias testing and fairness assessment
- Required documentation deliverables
- Ongoing compliance monitoring
In Client Education
Help clients understand their regulatory obligations:
- Brief clients on regulations that affect their AI initiatives
- Explain the practical implications (what they need to do, not just what the law says)
- Help them prepare for internal governance review
- Connect them with legal resources for complex regulatory questions
Pricing Compliance Work
Compliance work is valuable and should be priced accordingly:
- Regulatory assessment (identify applicable regulations): $3K-$10K
- Compliance integration (build compliance into project delivery): 10-20% of project budget
- Bias audit (comprehensive bias testing and documentation): $5K-$15K per audit
- Compliance documentation package (all required documentation): $5K-$20K depending on scope
- Ongoing compliance monitoring (monthly monitoring and quarterly audits): $2K-$5K per month
Common Regulatory Mistakes
Mistake 1: Assuming Regulations Do Not Apply
Many agencies assume that because they are a service provider (not the deployer), regulations do not apply to them. This is increasingly incorrect. The EU AI Act imposes obligations on providers (developers) of AI systems, not just deployers.
Mistake 2: Waiting Until Asked
By the time a client asks about compliance, the project may already be designed in a way that makes compliance difficult. Address regulatory requirements proactively during discovery.
Mistake 3: Treating Compliance as Legal Work Only
Compliance has technical requirements (logging, bias testing, explainability) that need to be built into the system. Legal review alone does not make a system compliant.
Mistake 4: One-Time Compliance
Compliance is not a one-time activity. Regulations evolve, models change, and data shifts. Ongoing monitoring and periodic audits are necessary.
Mistake 5: Over-Engineering Compliance
Not every project is high-risk under every regulation. Apply proportionate compliance effort based on the actual risk level and applicable requirements. A low-risk content tagging system does not need the same compliance investment as a high-risk employment screening tool.
AI regulation is here and growing. The agencies that embrace it—building regulatory expertise, integrating compliance into delivery, and positioning it as a value proposition—will thrive. The ones that ignore it will find themselves locked out of the most valuable enterprise engagements as governance requirements become non-negotiable.