“Sorry, can’t talk on Friday, it’s really tight. We have our sprint planning.” How often have you said that to someone? At many companies, the sprint planning and wrap-up ceremonies end up consuming almost a full day every couple of weeks: one day out of a ten-day sprint, or 10% of your time. Despite that huge investment, after all of that pomp and circumstance, what do you have to show for it?
I don’t mind the incessant focus on fast iterations, as long as it achieves meaningful, tangible impact. No one cares that your R&D organization flawlessly moves 100% of the tasks it committed to into the Done pile every sprint. Not a single CEO in the world put together an engineering team because they liked the idea of code being written; code is a means to an objective. Without that objective in view, fast iteration becomes a malady, and it is afflicting too many teams.
The Problem With Fast Iterations
Don’t get me wrong: working with a sense of urgency and focus is not a negative thing. In fact, it is fundamental to creating a highly impactful team. The issues begin when we misinterpret the intention behind moving fast and slip into cargo-cult mode.
Picture the following scene, which I find prevalent, and consider whether it sounds familiar. A team is about to wrap up a two-week iteration and deploy the feature or changes it has been working on at the end of that time frame. Often, sprints are aligned with the calendar so that the end of the workweek is also the end of the sprint. The team finished deploying the code moments ago, and minutes later it is sitting down with Product to plan the next sprint, which starts after the weekend.
Since the code is brand new in production, the subsequent work being planned and committed to is not based on any learning. There was no time to see the effects of the new capabilities in the hands of users. Your leadership and product managers rush to come up with work plans before they have even seen the thing working, because you have to feed the beast: provide your engineers with tickets to chomp on.
During this next sprint, you might finally see how the previous sprint’s work performs, and then schedule improvements based on that feedback for sprint #3. The bottom line is that you routinely commit to a bunch of work without genuinely knowing whether it is needed or right, because you never take the time to see the fruits of your labor.
If you think back to the ideas behind the Agile Manifesto, I don’t believe those bright people intended us to die on the hill of fast iterations. At its root, the practice was about achieving tight feedback loops. Working fast for the sake of working fast is not good enough. We need to learn from what we did and then incorporate that feedback into deciding the next steps, with clear business benefits in mind.
Making Time for Feedback
What, then, is the right way to proceed? The team is finishing its work, and we need to wait to gather results and feedback. What should the team do in the meantime?
At the very least, avoid continuing work on the same feature under the assumption that you already know what the next steps should be. If the team can work on something different for a while, that’s fine. Some teams adopt different working rhythms to accommodate the feedback time: after a cycle of development, the team gets some form of sabbatical to do other things (Basecamp’s Shape Up, for example, pairs six-week cycles with two-week cooldowns). Product uses this time to learn, gather insights, and plan the next phase properly.
In my upcoming book, The Tech Executive Operating System, I discuss the concept of intermissions. Intermissions have a multitude of benefits for everyone involved, some of which I’ve covered here. One of them is gaining the wherewithal to perform the learning phase before rushing on to the next sprint. With a cascading setup of intermissions, you can have a different team in a learning phase at any one point, as the sketch below illustrates. During this time, a team can also perform vital work that is rarely urgent enough to get scheduled (see the previously linked article for more details).
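To make the cascade concrete, here is a minimal sketch in Python. The team names, the three-sprint cycle, and the two-work-sprints-to-one-intermission ratio are my own illustrative assumptions, not numbers from the book; the point is only that when the cycle length equals the team count and each team’s cycle is offset by one sprint, exactly one team is in its learning phase at any given time.

```python
# A toy sketch of a cascading intermission calendar (illustrative
# assumptions, not a prescription from the book): three teams, each
# running three-sprint cycles of two working sprints followed by one
# intermission. Because the cycle length equals the team count,
# exactly one team is in its learning phase during any given sprint.

TEAMS = ["Team A", "Team B", "Team C"]
CYCLE = ["work", "work", "intermission"]

def phase(team_index: int, sprint: int) -> str:
    """A team's phase in a given sprint; each team's cycle is offset
    by one sprint so the intermissions cascade instead of colliding."""
    return CYCLE[(sprint + team_index) % len(CYCLE)]

for sprint in range(6):
    row = ", ".join(f"{team}: {phase(i, sprint)}" for i, team in enumerate(TEAMS))
    print(f"Sprint {sprint + 1} -> {row}")
```

Running it prints a rotation where each sprint has two teams executing and one team learning, which is all a cascading setup needs to guarantee: the ratio and cycle length are knobs you would tune to your own organization.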
Meta-Reviews
The complement to intermissions is the meta-review. I assume you’ve heard of retrospectives and postmortems and practice them regularly. However, these are often focused solely on execution and its effectiveness: doing things right. What’s missing is the higher-level consideration of efficacy: are we doing the right things?
Meta-reviews are where your leadership and product people sit down and review the actual results of the work that was performed. It’s great that the feature shipped on time and without bugs, but does it achieve the results it was supposed to? In meta-reviews, we look at work performed a while ago and its effects. Do the effects justify investing more time to improve and adjust it? Should you abandon it altogether?
Only once armed with the insights from meta-reviews can your leadership team decide on the right path forward. Commit to work plans prematurely, and you’re merely moving fast and breaking things, but moving where? When you don’t know where you’re heading, “no wind is the right wind.”