Sector-Specific AI Compliance Requirements: Healthcare, Finance, and Beyond
Your agency built an AI-powered patient risk scoring tool for a healthcare network. The model was technically excellent: strong predictive performance, thorough documentation, and fairness testing. But when the client submitted it for regulatory review as part of a broader clinical workflow, the FDA classified it as Software as a Medical Device (SaMD) and required 510(k) clearance before deployment. Nobody at your agency had anticipated this classification. The client hadn't mentioned it because they assumed your team would know. The regulatory submission process added eight months and $200,000 to the project, and your agency's contract didn't account for either. The client was frustrated, your team was overwhelmed by regulatory requirements they'd never encountered, and the project nearly collapsed.
Every regulated industry has its own AI compliance requirements, and they don't replace general AI governance; they stack on top of it. An AI system in healthcare must comply with HIPAA, FDA regulations, state health data laws, and general AI regulations. An AI system in financial services must comply with fair lending laws, SEC regulations, banking regulators' guidance, and general AI regulations. Agencies that serve regulated industries need deep knowledge of these sector-specific requirements, or they need to know when to bring in experts who do.
This guide covers the major sector-specific AI compliance requirements across healthcare, financial services, insurance, employment, education, and government: the sectors where AI agencies are most active and where regulatory exposure is highest.
Healthcare AI Compliance
Healthcare is one of the most heavily regulated sectors for AI, with requirements from federal agencies, state regulators, and industry bodies.
FDA Regulation of AI/ML-Based Medical Devices
The FDA regulates AI systems that meet the definition of a medical device, including software that is intended to diagnose, treat, cure, mitigate, or prevent disease or other conditions.
Software as a Medical Device (SaMD). The FDA has developed a risk-based framework for classifying SaMD. Classification depends on the seriousness of the health condition the software addresses and the significance of the software's role in the clinical decision; a rough sketch of this triage logic follows the list below.
- Class I (low risk): software that provides information to supplement other clinical decision factors. May be exempt from premarket review.
- Class II (moderate risk): software that drives clinical management or informs clinical decisions for non-serious conditions. Typically requires 510(k) clearance.
- Class III (high risk): software that drives clinical management for serious or critical conditions. Requires Premarket Approval (PMA).
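To make the triage concrete, here is a minimal sketch of the risk-band logic, assuming hypothetical condition and role labels. It is an internal screening aid, not a legal determination: actual classification turns on intended use, product code, and available predicates.

```python
CONDITION_LEVELS = ("non-serious", "serious", "critical")
ROLE_LEVELS = ("inform", "drive", "treat-or-diagnose")

def samd_risk_band(condition: str, software_role: str) -> str:
    """Rough risk band from condition seriousness and the software's role."""
    if condition not in CONDITION_LEVELS or software_role not in ROLE_LEVELS:
        raise ValueError("unknown condition or role")
    # Higher index on either axis means more regulatory risk.
    score = CONDITION_LEVELS.index(condition) + ROLE_LEVELS.index(software_role)
    if score <= 1:
        return "low: possibly Class I or exempt"
    if score <= 3:
        return "moderate: likely Class II (510(k))"
    return "high: potentially Class III (PMA)"

print(samd_risk_band("serious", "drive"))  # moderate: likely Class II (510(k))
```

Whatever the band, confirm the result with regulatory counsel before scoping the project around it.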
The FDA's approach to AI/ML updates. Traditional medical device regulation requires new regulatory submissions for significant changes. The FDA has developed a Predetermined Change Control Plan framework that allows manufacturers to make certain types of AI/ML modifications without new submissions, as long as the changes are within the scope of the approved plan.
Clinical decision support (CDS) exemptions. Not all healthcare AI requires FDA clearance. Software may be exempt from FDA regulation if it meets the CDS exemption criteria: it does not acquire or process signals from a medical device, it displays or analyzes medical information, it is intended for healthcare professionals, and it enables the professional to independently review the basis for its recommendations.
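A trivial checklist helper can keep that assessment visible during scoping. The field names below are hypothetical, and a passing result is a prompt to investigate the exemption with regulatory counsel, not a conclusion.

```python
from dataclasses import dataclass

@dataclass
class CdsProfile:
    # The four criteria summarized above, captured at project scoping.
    acquires_device_signals: bool         # processes signals/images from a device?
    displays_medical_information: bool    # displays or analyzes medical information?
    intended_for_professionals: bool      # intended for healthcare professionals?
    basis_independently_reviewable: bool  # can the clinician review the basis?

def may_qualify_for_cds_exemption(p: CdsProfile) -> bool:
    """All four criteria must hold for the exemption to be worth pursuing."""
    return (not p.acquires_device_signals
            and p.displays_medical_information
            and p.intended_for_professionals
            and p.basis_independently_reviewable)

tool = CdsProfile(False, True, True, True)
print(may_qualify_for_cds_exemption(tool))  # True -> investigate the exemption
```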
What agencies must do:
- Assess whether the AI system meets the FDA's definition of a medical device during project scoping, not after development
- If FDA-regulated, build the regulatory submission process into the project timeline and budget
- Implement Quality Management System (QMS) requirements that apply to medical device development
- Design the system to support the Predetermined Change Control Plan framework if ongoing AI/ML updates are anticipated
- Document the system using FDA-required formats, including a thorough risk analysis
HIPAA Requirements
The Health Insurance Portability and Accountability Act (HIPAA) imposes strict requirements on the handling of Protected Health Information (PHI).
- Minimum necessary standard. Only the minimum amount of PHI necessary for the intended purpose should be used. This affects training data selection and feature engineering.
- Business Associate Agreements (BAAs). Your agency must sign a BAA with the covered entity (healthcare provider, insurer, or clearinghouse) if you will access PHI.
- Security requirements. Technical safeguards including access controls, audit controls, integrity controls, and transmission security must be implemented.
- De-identification. If PHI is de-identified using HIPAA-approved methods (Safe Harbor or Expert Determination), it falls outside HIPAA's scope. Consider whether de-identified data is sufficient for model training; a minimal Safe Harbor-style sketch follows this list.
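The sketch below drops identifier fields before data leaves the covered environment. The field names are hypothetical; Safe Harbor covers 18 identifier categories, some of which (dates, ZIP codes, ages over 89) require truncation or aggregation rather than simple removal, and it also requires no actual knowledge that residual data is identifiable.

```python
# Hypothetical field names mapped to Safe Harbor identifier categories.
SAFE_HARBOR_FIELDS = {
    "name", "street_address", "phone", "email", "ssn", "mrn",
    "health_plan_id", "account_number", "device_id", "ip_address",
    "biometric_id", "photo_url", "date_of_birth",
}

def strip_identifiers(record: dict) -> dict:
    """Return a copy of the record with identifier fields removed."""
    return {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}

patient = {"mrn": "A123", "age_years": 47, "diagnosis_code": "E11.9"}
print(strip_identifiers(patient))  # {'age_years': 47, 'diagnosis_code': 'E11.9'}
```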
State Health Data Laws
Many states impose additional requirements beyond HIPAA.
- California's Confidentiality of Medical Information Act (CMIA) provides protections that go beyond HIPAA in several areas
- New York's health data regulations impose specific requirements on the use of health data in AI systems
- Washington's My Health My Data Act applies to health data that falls outside HIPAA's scope
Financial Services AI Compliance
Financial services AI faces oversight from multiple federal and state regulators, each with specific expectations.
Fair Lending Laws
The Equal Credit Opportunity Act (ECOA), Fair Housing Act, and Community Reinvestment Act establish the foundation for fair lending compliance.
- Disparate impact testing. AI lending models must be tested for disparate impact across protected characteristics (race, color, religion, national origin, sex, marital status, age, receipt of public assistance)
- Adverse action notices. When a model contributes to a credit denial or unfavorable terms, the consumer must receive the specific reasons for the adverse action. AI models must be able to generate these reasons, which requires a level of explainability; see the sketch after this list.
- Model risk management. Federal banking regulators (OCC, FDIC, Federal Reserve) expect banks to follow SR 11-7 guidance on model risk management, which requires independent model validation, ongoing monitoring, and documentation.
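To illustrate why that explainability requirement shapes model design, the sketch below generates candidate adverse action reasons from a hypothetical linear scorecard; the feature names, weights, and reference values are invented. Real reason codes must map to the consumer-facing language Regulation B expects and be validated by compliance counsel.

```python
# Hypothetical scorecard weights and reference values, for illustration only.
WEIGHTS = {"credit_utilization": -2.0, "payment_history_score": 1.5,
           "months_since_delinquency": 0.8, "income_to_debt": 1.2}
REFERENCE = {"credit_utilization": 0.30, "payment_history_score": 0.90,
             "months_since_delinquency": 36.0, "income_to_debt": 3.0}

def principal_reasons(applicant: dict, top_n: int = 2) -> list[str]:
    """For a linear model, the score shortfall decomposes cleanly per feature;
    the largest negative contributions become candidate principal reasons."""
    contributions = {f: WEIGHTS[f] * (applicant[f] - REFERENCE[f]) for f in WEIGHTS}
    worst = sorted(contributions, key=contributions.get)[:top_n]
    return [f for f in worst if contributions[f] < 0]

applicant = {"credit_utilization": 0.85, "payment_history_score": 0.60,
             "months_since_delinquency": 6.0, "income_to_debt": 1.1}
print(principal_reasons(applicant))  # ['months_since_delinquency', 'income_to_debt']
```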
SEC and FINRA Requirements
For AI in securities and investment management:
- Regulation Best Interest requires that recommendations be in the customer's best interest. AI systems that make investment recommendations must comply.
- Recordkeeping rules require retention of communications and records related to investment recommendations, including AI-generated recommendations.
- SEC's AI proposals address the use of AI in predictive data analytics and would require broker-dealers and investment advisers to eliminate or neutralize conflicts of interest in AI-driven recommendations.
Anti-Money Laundering (AML) Requirements
- Bank Secrecy Act (BSA) and FinCEN regulations require financial institutions to maintain effective AML programs. AI used in transaction monitoring must be tested and validated.
- Explainability requirements. When AI identifies suspicious activity, investigators need to understand why the alert was generated. Black-box models are generally unacceptable for AML monitoring; a sketch of an explainable alert follows.
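A minimal sketch, assuming hypothetical rule names, thresholds, and a placeholder jurisdiction list: each alert carries the rules that fired, so an investigator can reconstruct why it exists.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    account_id: str
    reasons: list[str] = field(default_factory=list)  # why the alert exists

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder jurisdiction list

def screen_transactions(account_id: str, txns: list[dict]):
    reasons = []
    total = sum(t["amount"] for t in txns)
    # Several sub-threshold transactions summing past the reporting threshold.
    if total > 10_000 and all(t["amount"] < 10_000 for t in txns):
        reasons.append(f"possible structuring: {len(txns)} transactions totaling {total:,.0f}")
    if any(t.get("country") in HIGH_RISK_COUNTRIES for t in txns):
        reasons.append("activity involving a high-risk jurisdiction")
    return Alert(account_id, reasons) if reasons else None

alert = screen_transactions("acct-1", [{"amount": 9500.0, "country": "XX"},
                                       {"amount": 9800.0, "country": "US"}])
print(alert.reasons if alert else "no alert")
```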
What agencies must do:
- Implement comprehensive fair lending testing for all credit-related AI models
- Build adverse action reason generation capabilities into credit models
- Follow SR 11-7 model risk management guidance for models used by banking clients
- Ensure AI models used in AML comply with explainability requirements
- Maintain detailed model documentation that satisfies regulatory examination standards
Insurance AI Compliance
Insurance regulators are actively addressing AI use in underwriting, pricing, claims, and marketing.
Unfair Discrimination Testing
Insurance regulations prohibit unfair discrimination: using protected characteristics or their proxies in ways that are not actuarially justified. A simple disparity-screening sketch follows the list below.
- Colorado's insurance AI regulations require insurers to test AI models for unfair discrimination before deployment and to report testing results to the insurance commissioner
- The NAIC Model Bulletin establishes expectations for insurers' governance of AI, including testing for unfair discrimination and bias
- State-level adoption of the NAIC guidance is proceeding, with multiple states implementing similar requirements
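As a first screening signal, the sketch below compares mean model outputs (for example, quoted premiums) across demographic groups; the tolerance and data are hypothetical, and it stands in for, rather than replaces, the testing methodologies state regulators prescribe.

```python
from statistics import mean

def premium_disparity(quotes: list[tuple[str, float]], tolerance: float = 0.05):
    """quotes: (group_label, quoted_premium) pairs. Returns groups whose mean
    quote deviates from the overall mean by more than the tolerance."""
    by_group: dict[str, list[float]] = {}
    for group, premium in quotes:
        by_group.setdefault(group, []).append(premium)
    overall = mean(p for _, p in quotes)
    return {g: round((mean(ps) - overall) / overall, 3)
            for g, ps in by_group.items()
            if abs(mean(ps) - overall) / overall > tolerance}

print(premium_disparity([("A", 100.0), ("A", 102.0), ("B", 118.0), ("B", 121.0)]))
# {'A': -0.084, 'B': 0.084} -- both groups exceed the 5% tolerance
```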
Actuarial Standards
- Actuarial Standards of Practice (ASOPs) are being updated to address AI and ML models used in insurance. Actuaries who develop or review AI models must follow these standards.
- Model transparency. Regulators expect insurers to understand how their AI models work, which creates challenges for complex ML models. Many regulators prefer or require interpretable models for core insurance decisions.
What agencies must do:
- Implement unfair discrimination testing that meets the standards being established by state insurance regulators
- Document models in a way that satisfies actuarial standards
- Design models with sufficient transparency to meet regulatory expectations
- Build reporting capabilities that comply with state-specific requirements
Employment AI Compliance
AI in employment decisions faces a complex web of federal and state requirements.
Federal Requirements
- Title VII of the Civil Rights Act prohibits employment discrimination based on race, color, religion, sex, and national origin. AI systems used in hiring, promotion, or termination decisions must not produce discriminatory outcomes.
- The Age Discrimination in Employment Act (ADEA) prohibits age-based discrimination. AI hiring tools must be tested for age-based bias.
- The Americans with Disabilities Act (ADA) requires reasonable accommodation and prohibits discrimination based on disability. AI hiring tools that use assessments (video interviews, game-based assessments) must be accessible and must not discriminate against individuals with disabilities.
- EEOC guidance on AI. The EEOC has issued guidance clarifying that employers are responsible for the discriminatory outcomes of AI tools, even when those tools are developed by third parties.
State and Local Requirements
- NYC Local Law 144 requires annual bias audits and candidate notice for automated employment decision tools
- Illinois AI Video Interview Act requires notice, explanation, and consent for AI analysis of video interviews
- Multiple states are considering or have enacted additional requirements for AI in employment
What agencies must do:
- Conduct adverse impact analyses using the EEOC's four-fifths rule (see the sketch after this list)
- Build notice and consent mechanisms for employment AI tools
- Support annual bias audits by independent auditors (as required by NYC and other jurisdictions)
- Ensure AI hiring tools are accessible to individuals with disabilities
- Provide the ability for candidates to request human review of AI-driven decisions
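The four-fifths rule itself reduces to a small calculation, sketched below with hypothetical counts: a group's selection rate should be at least 80% of the highest group's rate. Real audits also weigh statistical significance and sample sizes.

```python
def four_fifths_check(selections: dict[str, tuple[int, int]]) -> dict[str, float]:
    """selections: group -> (selected, applicants). Returns each group's
    selection rate divided by the highest group's rate (the impact ratio)."""
    rates = {g: s / n for g, (s, n) in selections.items() if n > 0}
    top = max(rates.values())
    return {g: round(r / top, 3) for g, r in rates.items()}

print(four_fifths_check({"group_a": (48, 100), "group_b": (30, 100)}))
# {'group_a': 1.0, 'group_b': 0.625} -- 0.625 < 0.8 signals potential adverse impact
```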
Education AI Compliance
AI in education is increasingly regulated, particularly regarding student data.
FERPA
The Family Educational Rights and Privacy Act protects student education records.
- Student data limitations. AI systems that process student education records must comply with FERPA's requirements for consent, access, and disclosure.
- Directory information. FERPA distinguishes between directory information (which can be disclosed under certain conditions) and non-directory information. AI systems must respect these distinctions; a partitioning sketch follows this list.
- School official exception. Education technology providers may access student records under the "school official" exception if they have a legitimate educational interest, but this exception has specific requirements.
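An illustrative partition of student record fields; which fields count as directory information is set by each institution's FERPA annual notice, so the mapping below is hypothetical.

```python
DIRECTORY_FIELDS = {"name", "enrollment_status", "degree_program"}

def partition_record(record: dict) -> tuple[dict, dict]:
    """Split a student record into (directory, restricted) portions."""
    directory = {k: v for k, v in record.items() if k in DIRECTORY_FIELDS}
    restricted = {k: v for k, v in record.items() if k not in DIRECTORY_FIELDS}
    return directory, restricted

student = {"name": "J. Doe", "degree_program": "BSc", "grades": [3.7, 3.9]}
directory, restricted = partition_record(student)
print(restricted)  # {'grades': [3.7, 3.9]} -- needs consent or a FERPA exception
```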
COPPA
The Children's Online Privacy Protection Act applies to AI systems that collect data from children under 13.
- Parental consent is required before collecting personal information from children
- Data minimization: only data necessary for the stated purpose should be collected
- Deletion obligations: personal information must be deleted when no longer needed
What agencies must do:
- Implement FERPA-compliant data handling for AI systems that process student records
- Comply with COPPA for any AI system that interacts with or collects data from children
- Build age-gating and parental consent mechanisms as needed (a minimal sketch follows this list)
- Minimize data collection to what is necessary for the educational purpose
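A minimal age-gating sketch, assuming a hypothetical consent store queried elsewhere. Note that COPPA also constrains how parental consent may be verified, which this gate does not address.

```python
def may_collect_data(age: int, parental_consent_on_file: bool) -> bool:
    """Block collection for users under 13 until verifiable parental
    consent has been recorded."""
    if age >= 13:
        return True
    return parental_consent_on_file

assert may_collect_data(15, False) is True
assert may_collect_data(11, False) is False
assert may_collect_data(11, True) is True
```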
Government AI Compliance
AI systems built for government agencies face specific requirements.
- Federal AI Executive Orders have established requirements for AI used by federal agencies, including testing and safeguards for AI that could affect individual rights or safety
- OMB guidance requires federal agencies to establish AI governance structures, conduct impact assessments, and ensure adequate human oversight
- FedRAMP certification may be required for cloud-based AI systems used by federal agencies
- State and local requirements vary by jurisdiction but often include transparency, fairness, and accountability requirements for government AI
What agencies must do:
- Comply with federal AI governance requirements for systems used by federal agencies
- Obtain necessary security certifications (FedRAMP, StateRAMP) for cloud-based AI systems
- Implement transparency and accountability mechanisms that meet government standards
- Conduct impact assessments that satisfy government requirements
Building Sector Competence in Your Agency
Specialize or Partner
Not every agency can be expert in every sector's compliance requirements. Choose your approach:
Specialization. Focus on one or two sectors and develop deep compliance expertise. This is the most efficient approach for small and mid-sized agencies. Deep specialization becomes a competitive advantage.
Partnership. Partner with compliance consultants or law firms who specialize in the sectors you serve. This gives you access to sector expertise without the overhead of building it internally.
Hybrid. Develop internal expertise in your primary sectors and partner for occasional projects in other sectors.
Build Sector-Specific Frameworks
For each sector you serve, build a compliance framework that your team can follow; a machine-readable sketch of such a framework appears after this list.
- Regulatory map: all applicable regulations with their key requirements
- Compliance checklist: step-by-step checklist for ensuring compliance during project delivery
- Documentation templates: templates tailored to the sector's regulatory documentation requirements
- Testing protocols: sector-specific testing procedures (e.g., adverse impact testing for employment, unfair discrimination testing for insurance)
- Reference materials: regulatory guidance documents, case studies, and precedents
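One way to make the regulatory map operational is to keep it machine-readable so checklists can be generated per project. The structure and entries below are an illustrative sketch (abbreviated from the healthcare section above), not a complete map.

```python
from dataclasses import dataclass, field

@dataclass
class Regulation:
    name: str
    requirements: list[str] = field(default_factory=list)
    testing: list[str] = field(default_factory=list)

HEALTHCARE_MAP = [
    Regulation("HIPAA", ["BAA with the covered entity", "minimum necessary PHI"],
               ["access-control review", "audit-log verification"]),
    Regulation("FDA SaMD", ["device classification assessed at scoping"],
               ["risk analysis in FDA-required formats"]),
]

def checklist(regmap: list[Regulation]) -> list[str]:
    """Flatten the map into a per-project delivery checklist."""
    return [f"{r.name}: {item}" for r in regmap for item in r.requirements + r.testing]

for item in checklist(HEALTHCARE_MAP):
    print(item)
```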
Stay Current
Sector-specific AI regulations are evolving rapidly. Invest in staying current.
- Subscribe to regulatory updates from relevant agencies (FDA, SEC, OCC, state insurance commissioners, etc.)
- Attend sector-specific conferences and webinars on AI governance
- Engage with industry groups that track and influence AI regulation in your sectors
- Review enforcement actions and regulatory guidance as they're published
Your Next Steps
This week: Identify the sectors your agency currently serves and list the sector-specific AI regulations that apply. Assess your current compliance for each regulation.
This month: For your primary sector, build a comprehensive compliance framework including a regulatory map, compliance checklist, and documentation templates.
This quarter: Conduct a compliance audit of your active projects against sector-specific requirements. Address any gaps and update your processes to prevent them from recurring.
Sector-specific AI compliance is complex, but it's also a powerful competitive differentiator. Agencies that understand the regulatory landscape of their clients' industries can scope projects accurately, avoid regulatory surprises, and deliver systems that are compliant from day one. Agencies that treat compliance as an afterthought will face delayed projects, strained client relationships, and potential regulatory exposure. Choose expertise.