Most standups are status reports that nobody listens to. Most retrospectives are complaint sessions that produce no change. Both meetings have enormous potential to improve delivery and team performance, but only when run with discipline and purpose.
AI projects have unique characteristics that make effective team rituals even more important: outcomes are uncertain, technical approaches frequently pivot, and the gap between "the model seems to work" and "the model works reliably in production" is where most problems hide. Good standups surface these problems early. Good retrospectives prevent them from recurring.
The AI Agency Standup
Purpose
The standup exists to identify blockers, coordinate dependencies, and maintain awareness of project status. It is not a status report to management—it is a coordination tool for the delivery team.
Format
Duration: 15 minutes maximum. If it takes longer, the team is too large or the format is wrong.
Cadence: Daily for active project teams. Three times per week for teams with multiple concurrent projects.
Participants: Everyone actively working on the project. Optional for leadership—they should read the notes, not attend every standup.
The Three Questions (Modified for AI)
What did I complete since last standup? Be specific. "Worked on the extraction pipeline" is not useful. "Completed the extraction pipeline for invoice documents. Accuracy is at 87% on the test set, below our 92% target" is useful.
What am I working on today? Connect today's work to project milestones. "Optimizing extraction prompts to improve accuracy from 87% toward the 92% target. If prompt optimization does not reach the target, I will evaluate alternative chunking strategies."
What is blocking me or at risk? This is the most important question. Common AI project blockers:
- Waiting for client data access
- Model performance not meeting targets
- Unclear acceptance criteria
- Integration dependencies
- Unexpected edge cases in production data
AI-Specific Standup Topics
Include these topics regularly:
Evaluation metrics check: Briefly share where current metrics stand against targets. This keeps the entire team aware of progress and prevents late surprises.
Data quality observations: Anything unusual or unexpected in the client data that was discovered during development. Early visibility into data issues prevents wasted effort.
Experiment results: If someone tested a new approach, prompt strategy, or model, share the result—even if it failed. Failed experiments inform the team's collective understanding.
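The daily metrics check is easy to script so the same numbers appear in standup notes every day. Below is a minimal sketch, assuming metrics where higher is better (such as accuracy) stored as plain dicts; the metric names, values, and targets are illustrative, not from any particular project.

```python
# Minimal sketch: flag metrics that miss their target so the gap is
# visible in standup notes. Assumes higher-is-better metrics only.
# Metric names and numbers are illustrative.

def metrics_report(current: dict, targets: dict) -> list[str]:
    """Return one line per target metric, marking any that fall short."""
    lines = []
    for name, target in targets.items():
        value = current.get(name)
        if value is None:
            lines.append(f"{name}: no measurement yet (target {target})")
        elif value < target:
            lines.append(f"{name}: {value} vs target {target}  << BELOW TARGET")
        else:
            lines.append(f"{name}: {value} (target {target} met)")
    return lines

targets = {"extraction_accuracy": 0.92, "citation_precision": 0.95}
current = {"extraction_accuracy": 0.87, "citation_precision": 0.96}
for line in metrics_report(current, targets):
    print(line)
```

Pasting this output into the standup notes keeps "where are we against target" a shared fact rather than a recollection.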
Anti-Patterns to Avoid
The monologue standup: One person talks for ten minutes while others zone out. Enforce time limits (2 minutes per person) and redirect tangents to offline discussions.
The problem-solving standup: A blocker is raised and the team immediately tries to solve it. Standups identify problems, they do not solve them. "Let us discuss that after the standup" is the right response.
The management report: The standup feels like reporting to a boss rather than coordinating with peers. If leadership uses standups to micromanage, team members will stop sharing honestly.
The phantom standup: People go through the motions but nobody listens or acts on what is shared. If blockers are raised and never resolved, the standup loses credibility.
The AI Agency Retrospective
Purpose
The retrospective exists to improve how the team works. It identifies what is working well (keep doing it), what is not working (change it), and what actions to take (commitments with owners).
Format
Duration: 60-90 minutes for a two-week sprint. 90-120 minutes for a project-end retrospective.
Cadence: Every two weeks during active projects. Additionally, at the end of every project.
Facilitator: Rotate the facilitator role. The person facilitating should not be the project lead or the most senior person—this encourages more honest participation.
The Retrospective Structure
Step 1: Set the Stage (5 minutes)
Remind the team of the retrospective purpose and ground rules:
- What is said here stays here
- Focus on processes and systems, not individuals
- Be constructive—identify what to change, not just what is wrong
- Everyone participates
Step 2: Gather Data (15-20 minutes)
Each team member writes down observations on sticky notes (physical or digital):
- What went well? (Green)
- What did not go well? (Red)
- What surprised us? (Yellow)
- What should we try differently? (Blue)
Post all notes and group similar themes. Everyone reads silently.
Step 3: Generate Insights (20-30 minutes)
Discuss the major themes:
- Why did the positive things work? How do we ensure they continue?
- Why did the negative things happen? What is the root cause?
- What patterns do we see across projects?
For AI projects, probe specifically:
- Were our accuracy estimates realistic?
- Did we discover data issues early enough?
- Was the evaluation framework adequate?
- Did our deployment process work smoothly?
- Did client communication keep expectations aligned?
Step 4: Decide on Actions (15-20 minutes)
Identify two to three specific actions the team will take:
- Each action must be specific and achievable
- Each action must have an owner
- Each action must have a deadline
- Do not commit to more than three actions (more than three means none get done)
Step 5: Close (5 minutes)
Review the actions. Each owner confirms their commitment. Set a date to check on progress.
AI-Specific Retrospective Topics
Model evaluation: Was our evaluation dataset representative? Did production performance match evaluation results? What did we learn about evaluation that we should apply next time?
Prompt engineering: Which prompt approaches worked? Which did not? What prompt patterns should we add to our library?
Data pipeline: Were there data quality issues we should have caught earlier? What validation should we add to our standard process?
Client expectations: Were expectations aligned throughout the project? Where did misalignment occur and how do we prevent it?
Technical debt: What shortcuts did we take that need to be addressed? What would we build differently if starting over?
Making Retrospectives Produce Change
The biggest failure of retrospectives is identifying problems without implementing solutions. Ensure change happens:
Track actions: Maintain a shared list of retrospective actions with owners and deadlines. Review progress at the start of each retrospective.
Make actions visible: Post actions where the team sees them daily (project management tool, team channel).
Celebrate completed actions: When an action produces improvement, acknowledge it. This reinforces that retrospectives lead to real change.
Escalate persistent issues: If the same issue appears in multiple retrospectives without resolution, it needs leadership attention. Escalate it with the data showing it has been raised repeatedly.
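The action list above needs almost no tooling: each item is a description, an owner, a deadline, and a done flag, and the review at the start of the next retrospective is just "which open items are past their deadline?". A minimal sketch, with illustrative field names and sample actions (not a prescribed schema):

```python
# Minimal sketch of a retrospective action tracker. Each action has an
# owner and a deadline; overdue open items are surfaced first at the
# next retrospective. Names and dates below are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class Action:
    description: str
    owner: str
    deadline: date
    done: bool = False

def overdue(actions: list[Action], today: date) -> list[Action]:
    """Open actions past their deadline -- review these first."""
    return [a for a in actions if not a.done and a.deadline < today]

actions = [
    Action("Add data validation step to intake checklist", "Priya", date(2024, 6, 1)),
    Action("Document prompt patterns in shared library", "Marcus", date(2024, 6, 15)),
]
for a in overdue(actions, today=date(2024, 6, 10)):
    print(f"OVERDUE: {a.description} (owner: {a.owner}, due {a.deadline})")
```

A shared spreadsheet or project-management board works just as well; the point is that owners, deadlines, and overdue status are explicit and visible.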
Meeting Hygiene
For All Team Meetings
Start and end on time: Respect everyone's schedule. Do not wait for latecomers—start without them.
Capture notes: Assign a note-taker (rotate the role). Share notes within 24 hours.
No laptops in retrospectives: Full attention leads to better participation. Standups are short enough to run without them.
Remote-first design: If any participant is remote, run the meeting as if everyone is remote. Do not disadvantage remote participants with in-room side conversations.
Meeting Cadence for AI Agencies
- Daily: Standup (per project team)
- Weekly: Team meeting (all-hands, 30 minutes—company updates, cross-project awareness)
- Biweekly: Retrospective (per project team)
- Monthly: Technical knowledge sharing (team learning session)
- Quarterly: Strategic review (leadership + team leads)
Keep the total meeting load under 20% of available work hours. More than that erodes delivery capacity.
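The 20% ceiling is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes a 40-hour week and the cadence listed above for one project team member; all hour figures are illustrative estimates.

```python
# Back-of-the-envelope check of weekly meeting load against the 20% ceiling.
# Hours per week are illustrative estimates for one project team member
# on the cadence described above (40-hour week assumed).
weekly_meeting_hours = {
    "daily standup (5 x 15 min)": 5 * 0.25,
    "weekly team meeting (30 min)": 0.5,
    "retrospective (90 min biweekly)": 1.5 / 2,
    "monthly knowledge sharing (2 h)": 2.0 / 4,
}
total = sum(weekly_meeting_hours.values())
available = 40.0
load = total / available
print(f"meeting load: {total:.2f} h / {available:.0f} h = {load:.1%}")
# -> meeting load: 3.00 h / 40 h = 7.5%
```

This recurring load sits well under the ceiling; it is the ad-hoc meetings layered on top that usually push teams past 20%.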
Standups and retrospectives are simple meetings with outsized impact—when run well. Run them with discipline, focus on actionable outcomes, and make them a non-negotiable part of your delivery practice.