Every time I bring my car to the shop for its yearly checkup, I get nervous. I’m completely clueless. They tell me I need to replace the thingamabob and the whatchamacallit, that it costs as much as a new MacBook Pro, and I just stare blankly. I have no way of knowing whether that makes sense. My solution was to buy a new car with a warranty.
Many of my clients have a similar issue with their engineering organizations. CEOs ask me whether the time estimates or headcount requests they are getting are sensible, because they have no way of knowing one way or the other. I see a similar thing with tech executives, who understand how the sausage is made but have nothing to compare themselves to.
I’d love to link you to a readymade online assessment tool where you answer a few questions and it whips up an answer. Unfortunately, I’m not that smart. Nevertheless, having worked with companies of all sizes globally and talked with hundreds of tech executives every year, I have developed some heuristics that help me get a feel for where a team stands.
In my new book, The Tech Executive Operating System, I cover these in more depth as part of the self-service crash course for readers, but here they are in short (the book is coming out in April; to get updates and lots of free content, subscribe at the end of this article).
Velocity might be the most challenging aspect to compare objectively, which is why I’m starting with it. Every team is different, so what seems easy for one would be harder for another. There’s a reason we’re seeing a whole bunch of startups aimed at making these kinds of insights easier to gain.
Until those tools become viable, I recommend taking stock of your delivery metrics. How often are you deploying? What is the team’s cycle time? If you’re using some form of velocity calculation, like story points, can you see a trend? You’d like to see story-point velocity at least remain stable from one iteration to the next, and hopefully improve gradually. Too many teams, though, see velocity dip just as they are hiring and growing (the infamous “we’ve doubled the team and halved the velocity”). And you should not have the overall feeling that most of your projects are death marches that never finish on time.
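To make these two metrics concrete, here is a minimal sketch of computing them from raw timestamps. The data and field layout are invented for illustration; in practice you would pull these from your issue tracker and deploy pipeline:

```python
from datetime import datetime, timedelta

# Hypothetical work items: (work started, change deployed) per change.
changes = [
    (datetime(2021, 3, 1, 9), datetime(2021, 3, 2, 17)),
    (datetime(2021, 3, 2, 10), datetime(2021, 3, 4, 12)),
    (datetime(2021, 3, 5, 11), datetime(2021, 3, 5, 18)),
]

# Cycle time: average elapsed time from starting work to deploying it.
cycle_times = [done - start for start, done in changes]
avg_cycle = sum(cycle_times, timedelta()) / len(cycle_times)

# Deployment frequency: deploys per week over the observed window.
deploys = sorted(done for _, done in changes)
window_days = (deploys[-1] - deploys[0]).days or 1
deploys_per_week = len(deploys) / (window_days / 7)

print(f"avg cycle time: {avg_cycle}")
print(f"deploys/week: {deploys_per_week:.1f}")
```

The point is not the arithmetic but the trend: rerun this per iteration and watch whether cycle time shrinks and deploy frequency holds as the team grows.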
Quality is easier to assess, even for non-technical people. How many outages does the service suffer? How long does it take to recover from them? Outages should be infrequent, even at younger companies, and when they happen regularly, finding and exterminating their root cause should be treated as urgent. A team that gets used to being in continuous firefighting mode can take a long time to break that habit.
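The recovery question boils down to one number: mean time to recovery. A minimal sketch, assuming you keep an incident log with start and resolution times (the data here is made up):

```python
from datetime import datetime, timedelta

# Hypothetical outage log: (started, resolved) per incident.
incidents = [
    (datetime(2021, 2, 3, 14, 0), datetime(2021, 2, 3, 14, 45)),
    (datetime(2021, 2, 20, 2, 30), datetime(2021, 2, 20, 4, 0)),
]

# MTTR: mean time to recovery across all incidents in the window.
durations = [resolved - started for started, resolved in incidents]
mttr = sum(durations, timedelta()) / len(durations)
print(f"MTTR: {mttr}")
```

Tracking the incident count per month alongside MTTR tells you both halves of the story: how often things break and how quickly the team bounces back.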
Similarly, you should be able to track your customers’ satisfaction with the quality of the product and the number of new bugs and defects introduced after every release (or weekly, if you’re doing continuous deployment). For extra credit, you can also track the amount of rework in your development cycles, such as the number of issues found during code review of pull requests and the extra time that adds. You should aim to see rework decline by cohort: new employees tend to produce more of it than those who have been with the team for years. If rework is high across the board, you have quality or communication issues to tackle.
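The cohort comparison can be as simple as bucketing review findings by author tenure. The pull-request records and tenure buckets below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical PR records: (author tenure in months, issues found in review).
pull_requests = [
    (2, 6), (3, 5), (4, 4),     # newer hires
    (18, 2), (24, 1), (36, 1),  # veterans
]

# Bucket authors into tenure cohorts and average review findings per PR.
cohort_issues = defaultdict(list)
for tenure, issues in pull_requests:
    cohort = "under 1 year" if tenure < 12 else "1+ years"
    cohort_issues[cohort].append(issues)

averages = {c: sum(v) / len(v) for c, v in cohort_issues.items()}
print(averages)
```

A gap like this one is healthy: it means rework is a ramp-up cost, not a systemic problem. Similar averages across cohorts are the warning sign.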
If the team’s velocity and quality are good—they deliver what they committed to regularly and with satisfactory stability—should you be content? First, I’m always wary of getting used to low standards. I call that Stale Velocity.
But digging even deeper, let’s assume that the team’s velocity is fine. Is that the point where you can finally feel you’ve reached the holy grail? I’ll be honest: you’d be doing better than 80% of the engineering teams out there, so some celebration would certainly be appropriate. And still, this feels to me like being really happy about doing what we’re supposed to be doing.
To put it another way, if you were holding a performance review for your engineering organization, and it delivered on the roadmap as promised, wouldn’t you just say it has “met expectations?” What happened to trying to exceed expectations? Yes, it’s hard. Yes, it’s even harder when there’s a pandemic, and we’re fighting to get things done. But, in general, our strategy and aim shouldn’t be merely to create an average team. That’s not a good enough reason to get out of bed (well, at least not for me).
That extra bit is about doing things that are innovative and novel: helping the entire organization become better by acting as coders without borders. My most common example is to stop obsessing over tech debt and start creating Tech Capital: technological solutions that serve as force multipliers for R&D and the company as a whole. Can you spot places where a couple of engineers and two weeks would enable Customer Success to self-serve half of the common issues? Could you create an internal insights framework that allows Product to iterate faster without being hand-held by engineers or analysts for most of their questions? These are the kinds of things that set the best engineering teams apart, resulting in a tangible competitive advantage. It’s your overall impact on the business.
Again, measuring these is not trivial, but even finding a couple of metrics to track will make you a lot smarter when you need to assess how things are going and decide on your future plans. To get even more information, be sure to subscribe below for updates about The Tech Executive Operating System and the free upcoming webinars that will go in-depth on some of the above.
Get a sample chapter
Get the best newsletter for tech executives online, along with a free sample chapter of The Tech Executive Operating System 📖. Tailored for your daily work. Weekly, short, and packed with exclusive insights.