I hope you’re not balking too much at the title of this article. “What? Another type of retrospective??” Yes, we already have sprint retrospectives, performance reviews, and yearly assessments. However, bear with me here. Impact retrospectives are the only ones that are guaranteed to improve your… impact. They’re one of the most effective tools in my Engineering Impact toolkit for creating world-class engineering teams. If you could ensure that your team does less busywork, understands the client and business better, optimizes its work, and builds a better rapport with Product, wouldn’t you be happy to trade that for a meeting once every 2-3 months?
Meta Reviews
I was recently asked to expand on the best practices for meta reviews, which is what I called Impact Retrospectives several years ago in The Tech Executive Operating System. The original name was meant to contrast these with the regular iteration reviews we’re used to doing. While in the latter we talk about the tactics of execution (what was missing in the requirements, why we didn’t account for that holiday, the client outage that threw the whole sprint out of whack), impact retrospectives are more strategic.
Consider how a team could hit 100% of its commitments sprint after sprint yet achieve zero business outcomes. In fact, I’ve seen senior-heavy teams suffer from this more often. That’s because it’s easier for them to execute on requests: when features take only a few days to implement, the “barrier” to deciding to do them is lower. That means we can more easily get sucked into a busywork whirlpool where we deliver nothing of substance.
Add to that the timing of regular retrospectives: they take place right when we finish delivering the work, sometimes mere minutes before we dive head first into the next sprint. That means, by definition, that we haven’t had the opportunity to observe our changes and their impact for long. For most startups, which don’t have loads of traffic, the results of the changes take time to become apparent. We need to wait and see what Sales reports about the effect in demos, or wait for enough traffic to make the experiment significant.
Thus, engineers regularly go years without really being connected to the fruits of their labor. Software is shipped into the void without any real feedback. Yes, we might notice the business growing faster or slower. If we’re lucky, someone will share a nice customer quote every few months. But that’s hardly enough if you want to create a remarkable organization.
The Premise
While regular sprint retrospectives should handle the team’s productivity by uncovering issues in planning and execution, impact retrospectives are about efficacy. We cannot settle for racking up tasks in the “done” column and feeling good about ourselves. If a feature is delivered and it has no impact, was it really delivered? I’ll stop with the Zen koans.
As you can see in the double-axis chart above, we want both productivity and efficacy to unlock our team’s potential. Productivity alone results in busywork and flailing around, while efficacy without productivity is too little, too late, like watching your startup move in slow motion. The most effective tool for moving into the right quadrant is impact retrospectives.
How to Run It
I find that for most startups I’ve helped, the ideal cadence is every two to three months. However, if you can get enough signal about your features faster, even once a month works.
While these retrospectives provide lots of value directly to Engineering, they should actually be led by Product (which means you’ll have to step up your game and learn how to work with them effectively, another tenet of successful tech leadership that I help my clients with). In each retrospective, go over the newly delivered work that hasn’t been reviewed yet. For your first one, you can simply consider everything from the past quarter or so.
Then, for this list, collect evidence of what each item has actually achieved. How much was that new feature used? Did we see improvements in the KPIs? Do you have any client testimonials regarding the feature? If you use call-recording or AI transcription services, you can query them to collect real quotes about how the work has impacted clients.
The goal is to assess, for every feature, whether it was actually worth developing in the first place and what we can learn from it for the future. For example, if you opted to deliver it with a customization option (which always requires more development time and maintenance effort), was the customization used enough to warrant the added complexity? When a feature doesn’t move the needle as expected, can we learn to spot areas where we could do less work in the future, at least until enough positive signal is collected?
In addition, what were the remarkable successes? Sometimes something turns out to be much more significant than we had anticipated. Was a capability useful for use cases other than those originally envisioned? What can we learn from that? Are there other similar opportunities?
From such a retrospective, you should gain the following:
- Concrete guidelines to make future work more effective, reducing scope to deliver results faster.
- Opportunities to leverage wins even further.
- Definition-of-done improvements based on what the team sees working.
- A better connection of the team to the results of its work, improving its Product Mastery.
- Improved ability to spot innovation opportunities in the future due to the team understanding the business better.
- Motivation and alignment improvements. Even when we realize we didn’t succeed, we can see steps taken to improve.
- A healthier relationship between Product and Engineering where they learn to collaborate and discuss business results.
- A team that’s moved more toward being Product Engineers.