If you’re stuck using the same old retro format (stop, start, continue), try this technique instead.
Many teams follow the same retrospective format week in, week out. If that works for you, great. But experience shows that continuous improvement activity can become stale and often stalls. Most retros are built on individuals’ opinions, and while there are books and websites full of ideas for new formats, the majority of the suggestions are equally subjective. Every team is sitting on a trove of data ready to reveal insights into how it can improve. Why not take a data-driven approach to your next retrospective?
The Data Driven Retrospective
The format of the retrospective goes something like this:
- Present the data to the team
- Drill down into the data looking for patterns or outliers
- Ask why, ask who, ask how, and take action!
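As a concrete sketch of the first step, here’s how the weekly cycle-time data might be assembled, assuming your tracking tool can export a start date and a done date per story (the records below are made up for illustration):

```python
from datetime import date
from collections import defaultdict

# Hypothetical story records: (started, done) dates exported from your tracker.
stories = [
    (date(2024, 1, 2), date(2024, 1, 5)),
    (date(2024, 1, 3), date(2024, 1, 12)),
    (date(2024, 1, 9), date(2024, 1, 11)),
    (date(2024, 1, 10), date(2024, 1, 24)),
]

def weekly_average_cycle_time(stories):
    """Group stories by the ISO week they finished and average their cycle times (days)."""
    buckets = defaultdict(list)
    for start, done in stories:
        buckets[done.isocalendar()[1]].append((done - start).days)
    return {week: sum(days) / len(days) for week, days in sorted(buckets.items())}

print(weekly_average_cycle_time(stories))
```

Plot the resulting week-to-average mapping and you have the raw material for the “present the data” step.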
The chart above was generated in Mingle (ThoughtWorks Studios). It shows average weekly cycle time over a six-month period. There are many issues with this tool, particularly the fact that it averages – but that’s for another post. Putting its shortcomings aside, it still serves as an excellent catalyst for analysing sources of variation.
Drilling down on outliers allows further exploration of the dataset.
Teams who track cycle time and use the data in retrospectives learn to spot patterns of what good and bad look like. When teams measure and manage this metric, we consistently see both average cycle time and the variation in cycle time fall.
Some teams use what they call a dwell time chart as follows, which aims to identify excessive queuing time:
The chart colours represent the various statuses a story passes through and how long it has been in each status. The white parts of the stacked bar chart represent waiting time. To improve flow efficiency, teams should look to reduce any waiting time in queues. Causes of waiting time include too much work in progress, blocked work and bottlenecks.
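Flow efficiency itself is simple to compute once you have time-in-status data: active time divided by total elapsed time. A minimal sketch, with made-up status durations and an assumed flag marking which statuses are queues:

```python
# Hypothetical time-in-status record for one story, in hours.
# Entries flagged wait=True are queues (the white bars on the dwell chart).
time_in_status = [
    ("Analysis",       16, False),
    ("Ready for Dev",  40, True),   # waiting
    ("In Dev",         24, False),
    ("Ready for Test", 32, True),   # waiting
    ("In Test",         8, False),
]

def flow_efficiency(entries):
    """Active time as a fraction of total elapsed time; 1.0 means no queuing at all."""
    total = sum(hours for _, hours, _ in entries)
    active = sum(hours for _, hours, wait in entries if not wait)
    return active / total

print(f"{flow_efficiency(time_in_status):.0%}")
```

In this invented example the story was only being worked on 40% of the time – the rest was queuing, which is exactly what the white bars expose.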
I encourage all teams to keep unblocked blockers in an envelope at the end of their card wall. Basically, when you unblock a card, don’t throw the magenta post-it away; stick it in the envelope. You can then lay these stickies out on the table in your retrospective and group them. Again, another set of data points that may provide insight.
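If you capture the blocker stickies electronically as well, the grouping step becomes trivial to repeat every retro. A sketch with hypothetical blocker descriptions:

```python
from collections import Counter

# Hypothetical unblocked-blocker stickies collected from the envelope.
blockers = [
    "waiting on ops", "flaky test env", "waiting on ops",
    "third-party API down", "waiting on ops", "flaky test env",
]

# Group and rank the stickies, just as you would on the table.
for cause, count in Counter(blockers).most_common():
    print(f"{count}x {cause}")
```

Recurring causes float straight to the top, which is usually where the retro conversation should start.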
The use of quality metrics in retrospectives is becoming more prevalent. Tech Leads are exploring stats, e.g. test coverage trends, and the team are responding with probing questions. Review the reports produced by your CI/CD pipeline tooling. SonarQube produces some great reports.
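Even without tool-specific reports, a probing question like “is coverage trending down?” can be answered from a plain list of weekly numbers. A sketch with made-up coverage figures (the data and threshold question are illustrative, not from any particular tool):

```python
# Hypothetical weekly test-coverage percentages exported from your CI tooling.
coverage = [81.2, 80.9, 80.1, 79.4, 78.8]

# Week-on-week deltas; a run of negatives means coverage is steadily eroding.
deltas = [b - a for a, b in zip(coverage, coverage[1:])]
trending_down = all(d < 0 for d in deltas)
print(trending_down)
```

A steady downward trend like this is a far better retro prompt than a single snapshot number.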
Successes & Feedback
I’ve been experimenting with data driven retros across many of the teams I coach. At a rough guess I would estimate 30-40 teams are now using this form of retrospective in preference to any other format. So why is it proving to be so popular?
The buzz and energy in the room during these retros is almost deafening. Participants are no longer expressing opinions but exploring data for insight. Cognitive biases are certainly reduced compared with the common (works well / could improve / puzzles) format of previous retrospectives.
Some teams have taken a blended approach, for example spending the first 30 minutes on data before switching to a works well / could improve / puzzles session. Others alternate, making every other retro data driven.
In this photo, multiple delivery teams come together to run an Ops Review.
In addition to getting more data-focused, I’ve been coaching teams not to wait for the next retro to look for improvements. Most of this data is available in real time, so why not retrospect in real time? Scrum Masters (or equivalent job titles!) should be watching queues for long waiting times, tech leads should be checking code quality metrics daily, and management should be spotting patterns of recurring blockers.
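A real-time queue watch can be as simple as flagging cards that have sat in a queue column longer than a team-agreed threshold. A sketch against a hypothetical board snapshot (the statuses, threshold and card IDs are all invented):

```python
from datetime import datetime, timedelta

# Hypothetical board snapshot: (card id, current status, entered status at).
now = datetime(2024, 3, 1, 9, 0)
board = [
    ("STORY-101", "Ready for Test", now - timedelta(days=4)),
    ("STORY-102", "In Dev",         now - timedelta(days=1)),
    ("STORY-103", "Ready for Dev",  now - timedelta(days=6)),
]

QUEUES = {"Ready for Dev", "Ready for Test"}   # assumed queue columns
THRESHOLD = timedelta(days=3)                  # assumed team-agreed tolerance

def stale_queue_items(board, now):
    """Flag cards that have sat in a queue column longer than the threshold."""
    return [card for card, status, since in board
            if status in QUEUES and now - since > THRESHOLD]

print(stale_queue_items(board, now))
```

Run something like this daily (or on every board change) and the “watching queues” job stops depending on anyone remembering to look.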