AI impact assessments are moving from best practice to legal requirement. The EU AI Act mandates them for high-risk systems. Colorado's AI Act requires them for high-risk automated decisions. Canada's proposed AIDA includes impact assessment requirements. And enterprise clients are increasingly requiring them as part of their internal governance processes, regardless of legal mandates.
An AI impact assessment evaluates the potential effects of an AI system on individuals, groups, and society before deployment. It identifies risks, proposes mitigations, and creates an auditable record of due diligence. For AI agencies, conducting impact assessments is both a governance requirement and a service that adds value to every engagement.
When to Conduct an Impact Assessment
Always Required
- AI systems that make or significantly influence decisions about individuals (employment, credit, insurance, healthcare, housing, education)
- Systems deployed in jurisdictions with impact assessment mandates (EU, Colorado, and others)
- Systems processing sensitive personal data
- Systems where the client's governance framework requires assessment
Strongly Recommended
- Customer-facing AI systems that interact with vulnerable populations
- Systems that process large volumes of personal data
- Systems with potential for bias or discrimination
- Systems replacing human decision-making in consequential processes
Proportionate Assessment
Not every project needs the same depth of assessment:
- Full assessment (2-4 weeks): High-risk systems, regulated industries, large-scale deployments
- Standard assessment (1-2 weeks): Medium-risk systems, moderate data sensitivity, smaller scope
- Light assessment (2-5 days): Low-risk systems, internal tools, minimal personal data
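The tiering above can be captured as a small lookup. This is an illustrative sketch, not a standard: the tier names and durations come from the list above, but the input signals (`risk_level`, `regulated_industry`, `large_scale`) are assumptions about how a team might triage.

```python
# Assessment tiers from the guide; durations are calendar ranges, not effort estimates.
TIERS = {
    "full": "2-4 weeks",
    "standard": "1-2 weeks",
    "light": "2-5 days",
}

def assessment_tier(risk_level: str, regulated_industry: bool, large_scale: bool) -> str:
    """Pick an assessment depth from a few coarse triage signals (hypothetical inputs)."""
    if risk_level == "high" or regulated_industry or large_scale:
        return "full"
    if risk_level == "medium":
        return "standard"
    return "light"
```

In practice the triage inputs would come from an intake questionnaire; the point is that the tier decision is recorded and repeatable, not ad hoc.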
The Assessment Framework
Section 1: System Description
Document what the AI system does:
- Purpose: What problem does the system solve?
- Functionality: How does the system work at a high level?
- Inputs: What data does the system process?
- Outputs: What decisions or recommendations does the system produce?
- Users: Who operates the system?
- Affected individuals: Who is affected by the system's outputs?
- Scale: How many people are affected and how frequently?
- Autonomy level: Does the system make decisions automatically, or does it support human decision-makers?
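The Section 1 fields above map naturally onto a structured record, which makes the system description reusable across assessments and easy to diff between versions. A minimal sketch, with field names chosen here for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class SystemDescription:
    """Section 1 of the assessment as a structured record (field names are illustrative)."""
    purpose: str                                        # problem the system solves
    functionality: str                                  # how it works at a high level
    inputs: list[str] = field(default_factory=list)     # data the system processes
    outputs: list[str] = field(default_factory=list)    # decisions or recommendations produced
    users: list[str] = field(default_factory=list)      # who operates the system
    affected_individuals: str = ""                      # who is affected by the outputs
    scale: str = ""                                     # how many people, how frequently
    autonomy_level: str = "human-in-the-loop"           # automatic vs decision support
```

Keeping this as data rather than free prose also lets later sections (risk register, monitoring plan) reference the same inputs and outputs by name.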
Section 2: Necessity and Proportionality
Evaluate whether AI is appropriate for this use case:
- Necessity: Is AI necessary to achieve the stated purpose, or could simpler methods work?
- Proportionality: Is the level of data processing proportionate to the benefit?
- Alternatives: Were non-AI alternatives considered? Why was AI chosen?
- Data minimization: Is the system using only the data necessary for its purpose?
Section 3: Rights and Freedoms Impact
Assess the potential impact on individuals' rights:
Privacy: Does the system collect, process, or store personal data? What protections are in place?
Non-discrimination: Could the system produce discriminatory outcomes based on protected characteristics? What testing has been conducted?
Due process: Can individuals challenge decisions the system makes about them? What appeal mechanisms exist?
Autonomy: Does the system respect individual choice, or does it manipulate behavior?
Transparency: Can affected individuals understand how the system works and why it made specific decisions?
Safety: Could the system cause physical, psychological, or financial harm?
Section 4: Risk Identification
Identify specific risks across categories:
Accuracy risks: What happens when the system is wrong? How often might it be wrong? What is the impact of errors on affected individuals?
Bias risks: Could the system disadvantage specific groups? What testing has been done? What populations are at risk?
Security risks: How could the system be attacked or manipulated? What data could be exposed? What defenses are in place?
Misuse risks: How could the system be used in ways not intended? What guardrails prevent misuse?
Dependency risks: What happens if the system becomes unavailable? What backup processes exist?
Drift risks: How might the system's performance change over time? What monitoring detects degradation?
Section 5: Risk Mitigation
For each identified risk, document the mitigation:
Technical mitigations: Bias testing, accuracy monitoring, security controls, input validation, output filtering, confidence thresholds.
Process mitigations: Human oversight, review procedures, escalation paths, audit schedules.
Organizational mitigations: Training, acceptable use policies, governance committees, incident response plans.
Residual risk: After mitigation, what risk remains? Is the residual risk acceptable given the benefits?
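One way to keep the risk-to-mitigation-to-residual chain auditable is a record per risk, so that every identified risk carries its mitigations and its post-mitigation rating in one place. The structure and the acceptability rule below are assumptions for illustration, not a prescribed scheme:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row of a risk register: identified risk, mitigations, residual rating."""
    category: str                                       # e.g. "bias", "security", "drift"
    description: str
    mitigations: list[str] = field(default_factory=list)
    inherent_rating: str = "high"                       # rating before mitigation
    residual_rating: str = "high"                       # updated once mitigations are in place

def acceptable(entry: RiskEntry) -> bool:
    # Assumption: only low or medium residual risk is acceptable without conditions.
    return entry.residual_rating in ("low", "medium")
```

A register of these entries answers the key Section 5 question directly: which risks remain above the acceptance threshold, and therefore which ones become deployment conditions.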
Section 6: Monitoring Plan
Define how the system will be monitored for the identified risks:
- What metrics will be tracked?
- How often will metrics be reviewed?
- What thresholds trigger investigation or intervention?
- Who is responsible for monitoring?
- How will monitoring findings be reported?
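The monitoring questions above reduce to metric/threshold pairs plus a trigger direction. A minimal sketch; the metric names and threshold values here are invented for illustration:

```python
# Each entry: metric -> (threshold, direction). Direction says which side triggers review.
THRESHOLDS = {
    "accuracy": (0.90, "below"),       # investigate if accuracy drops below 90%
    "override_rate": (0.15, "above"),  # investigate if humans override >15% of outputs
    "complaint_rate": (0.02, "above"), # investigate if complaints exceed 2% of decisions
}

def breaches(metrics: dict[str, float]) -> list[str]:
    """Return the metrics whose current value crosses its trigger threshold."""
    hits = []
    for name, value in metrics.items():
        if name not in THRESHOLDS:
            continue
        limit, direction = THRESHOLDS[name]
        if (direction == "below" and value < limit) or (direction == "above" and value > limit):
            hits.append(name)
    return hits
```

The review cadence and ownership questions from the list above stay in the monitoring plan document; the code only makes the trigger conditions unambiguous.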
Section 7: Stakeholder Consultation
Document who was consulted during the assessment:
- Client stakeholders (business owners, compliance, legal, IT)
- Affected individuals or their representatives (where feasible)
- Domain experts (for specialized risk areas)
- Regulatory advisors (for compliance questions)
Include a summary of input received and how it influenced the assessment.
Section 8: Conclusion and Recommendation
Provide an overall assessment:
- Summary of findings: Key risks identified and mitigations proposed
- Overall risk rating: High, medium, or low residual risk after mitigation
- Recommendation: Proceed, proceed with conditions, or do not proceed
- Conditions (if applicable): What must be in place before deployment
- Review schedule: When the assessment should be reviewed and updated
Conducting the Assessment
Step 1: Preparation (Days 1-3)
- Gather system documentation (design documents, data flow diagrams, model documentation)
- Identify stakeholders to interview
- Review applicable regulations and client governance requirements
- Adapt the assessment template for the specific project
Step 2: Analysis (Days 3-8)
- Conduct stakeholder interviews
- Review technical architecture for risk factors
- Analyze data handling practices
- Evaluate bias testing results
- Assess security controls
- Map regulatory requirements to system features
Step 3: Risk Evaluation (Days 8-12)
- Compile identified risks
- Rate risks by likelihood and severity
- Evaluate existing and proposed mitigations
- Determine residual risk levels
- Develop recommendations
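Rating risks by likelihood and severity is commonly done with a simple matrix. The 3x3 scale and cutoffs below are one common convention, not a mandate, so treat them as an assumption to adapt per client:

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_rating(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity into an overall rating via score = L * S."""
    score = LEVELS[likelihood] * LEVELS[severity]
    if score >= 6:   # high*medium (6) and high*high (9)
        return "high"
    if score >= 3:   # medium*medium (4), high*low / low*high (3)
        return "medium"
    return "low"
```

Whatever scale is used, record it in the report so the ratings in Step 3 are reproducible by the client's governance reviewers.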
Step 4: Documentation (Days 12-15)
- Draft the assessment report
- Include supporting evidence and references
- Create executive summary for non-technical stakeholders
- Review with the project team for accuracy
- Finalize and deliver
Step 5: Review and Approval
- Present findings to client stakeholders
- Incorporate feedback
- Obtain sign-off from the appropriate governance authority
- File the assessment as part of the project record
Common Assessment Pitfalls
Pitfall 1: Assessment as Checkbox
An impact assessment done only to satisfy a requirement, without genuine analysis, provides no value and may not withstand regulatory scrutiny. Conduct assessments honestly and thoroughly.
Pitfall 2: Assessing Too Late
Conducting the assessment after the system is built limits your ability to address findings. The most valuable time for an impact assessment is during the design phase, when architecture decisions can still be changed.
Pitfall 3: Ignoring Indirect Effects
Direct effects are obvious (the system denies a loan application). Indirect effects are less obvious but equally important (the system's error patterns cause certain communities to distrust the institution). Consider both.
Pitfall 4: No Follow-Through
An assessment that identifies risks but leads to no action is worse than useless: it creates documented evidence that you knew about risks and did nothing. Ensure every significant finding has a documented action plan.
Pitfall 5: One-Time Assessment
Impact assessments should be reviewed and updated when the system changes significantly, when the deployment context changes, when new risks are identified, or on a regular schedule (at least annually for high-risk systems).
Pricing Impact Assessments
Impact assessments are a distinct deliverable that should be scoped and priced in their own right:
- Light assessment (low-risk, small scope): $3K-$8K
- Standard assessment (medium-risk, moderate scope): $8K-$20K
- Full assessment (high-risk, large scope, regulatory requirements): $20K-$50K
- Annual review and update: 30-50% of the original assessment cost
Credit assessment fees toward the implementation project if the client proceeds. This positions the assessment as an investment in the project, not a standalone cost.
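The crediting arrangement can be sketched numerically. The credit fraction and the figures in the comment are illustrative business choices, not rules:

```python
def project_fee(assessment_fee: float, implementation_fee: float,
                credit_fraction: float = 1.0) -> float:
    """Total client cost if they proceed: implementation plus the uncredited
    portion of the assessment fee.

    credit_fraction = 1.0 credits the full assessment fee toward the project;
    the credit is capped at the implementation fee.
    """
    credit = min(assessment_fee * credit_fraction, implementation_fee)
    return assessment_fee + implementation_fee - credit

# e.g. a $10K standard assessment fully credited against a $60K build:
# the client's total outlay is $60K, not $70K.
```

The effect is the one described above: a client who proceeds pays nothing extra for the assessment, while a client who stops after the assessment has still paid for a useful governance artifact.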
AI impact assessments are governance infrastructure that protects everyone: the client, the users, and your agency. Conduct them honestly, document them thoroughly, and use them to make better design decisions. They are one of the clearest signals of professional maturity in AI consulting.