Tell Me How You Measure Me

A practical view of software engineering metrics

Francisco Trindade
5 min read · May 18, 2020

In one of my recent posts, I mentioned metrics as a tool I use to manage both up and down. Since then, I have received a few questions about which metrics I usually track and how I use them. Metrics are an essential part of engineering management for me, so here are the details.

Why Metrics?

Besides the usual “you can’t manage what you don’t measure”, metrics are becoming more relevant in the industry, especially after the release of Accelerate.

There is an opportunity to use metrics more in engineering management. I believe metrics fell out of favor as the industry shifted its focus from managing teams to managing individuals, but that's a topic for another post. When well applied, metrics can influence behavior and align the team positively, both as individuals and as a group.

Here are some examples of how:

To bring help. I have been an engineer and have seen many engineers work, and getting stuck in rabbit holes is a thing. If you understand your average cycle time, you can spot outliers as they are happening, which lets you step in with support when people need it.

To improve focus. All teams I have worked with are riddled with interruptions. They could be defects, legacy platforms, or special projects that are happening on the side. Measuring where your time is going can help you navigate distractions better.

I recently analyzed six months of one of my teams' work to identify our focus. It turns out we had spent 50% of our time on projects that didn't align with our objectives: we were busy maintaining legacy systems and executing various special projects. It's hard to hit your goals when half your effort is going elsewhere.

To improve efficiency. When you associate cycle time with specific parts of the system, you can identify the spots that require attention. Understanding maintenance hotspots is valuable information in support work: it helps you decide whether it's worthwhile to improve a system or leave it as it is.

I have used defect data to both manage up and down in different situations. Up when asking for time to refactor a specific broken system, and down when aligning with engineers on why we should/should not refactor a system.
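As a concrete illustration of the cycle-time point above: with a baseline of completed-story cycle times, you can flag stories that have been in progress unusually long. A minimal sketch — the mean-plus-two-standard-deviations cutoff is my assumption for illustration, not a formula from the post:

```python
from statistics import mean, stdev

def stuck_story_detector(completed_cycle_times, threshold=2.0):
    """Build a predicate that flags in-progress stories as outliers.

    completed_cycle_times: cycle times (in days) of recently finished
    stories, used as the baseline. A story still open longer than
    mean + threshold * stdev is worth checking in on.
    """
    avg = mean(completed_cycle_times)
    spread = stdev(completed_cycle_times)
    cutoff = avg + threshold * spread
    return lambda days_open: days_open > cutoff

# Hypothetical baseline: most stories finish in one to three days.
is_stuck = stuck_story_detector([1.5, 2.0, 3.0, 2.5, 1.0, 2.0])
is_stuck(7.0)  # True -- someone may be down a rabbit hole
is_stuck(2.0)  # False -- well within the usual range
```

The point of the predicate form is that you can run it daily against every open story and get a prompt to check in, rather than waiting for a retrospective.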

What to Measure?

Once you know how tracking can help, the next question is: what should you track? My suggestions are not exhaustive, but here is a set of measurements I have found useful.


Velocity

The basic one, if you use story point estimates. I usually measure velocity iteration by iteration, along with the average of the last three iterations. Since I'm not a fan of strict sprint work, I'm less worried about what happens each week and more about the trend.

Apart from prompting a discussion when there is considerable variation from one iteration to the next, the primary purpose of velocity for me is better predictability.

Velocity week by week for two different teams
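The trailing three-iteration average described above is simple to compute from an iteration history. A sketch, with hypothetical numbers:

```python
def rolling_velocity(points_per_iteration, window=3):
    """Average story points delivered over the last `window` iterations.

    A trailing average smooths week-to-week noise, so the trend is
    easier to see than in any single iteration's number.
    """
    recent = points_per_iteration[-window:]
    return sum(recent) / len(recent)

# Hypothetical iteration history, oldest first.
velocities = [21, 18, 25, 23, 19]
trend = rolling_velocity(velocities)  # average of the last three iterations
```

Plotting `trend` over time alongside the raw per-iteration numbers gives you both the noise (worth a conversation when it spikes) and the signal (what to use for predictability).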

Cycle Time

I measure the cycle time distribution as a per-point average across all features, and for each story size (1, 2, and 4 points in our case).

It makes trends visible (are we getting faster or slower overall?). I also use it to spot outliers and reflect on them: when outliers happen, you can look for possible improvements in both execution and estimation.

Cycle time averages per day and size
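The per-size and per-point averages can be computed directly from exported story records. A sketch with hypothetical data (the post's story sizes are 1, 2, and 4 points; the record shape is an assumption):

```python
from collections import defaultdict

def cycle_time_report(stories):
    """Average cycle time per story size, plus an overall per-point average.

    stories: (size_in_points, cycle_time_in_days) tuples for completed work.
    """
    by_size = defaultdict(list)
    for size, days in stories:
        by_size[size].append(days)
    per_size = {size: sum(d) / len(d) for size, d in by_size.items()}
    per_point = sum(days for _, days in stories) / sum(size for size, _ in stories)
    return per_size, per_point

# Hypothetical completed stories.
stories = [(1, 1.0), (1, 1.5), (2, 3.0), (2, 2.5), (4, 6.0), (4, 9.0)]
per_size, per_point = cycle_time_report(stories)
# per_size -> {1: 1.25, 2: 2.75, 4: 7.5}; per_point -> ~1.64 days per point
```

With this data in hand, an outlier is simply a story whose cycle time sits far from the average for its size bucket.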

Time spent on Features, Chores & Defects

A measurement of aggregated cycle time percentages for each type of work. I have found it useful for managing up and showing why specific results happened.

For example, in the first two weeks of February, we had a tight market deadline and delivered under pressure. The consequence becomes apparent in the percentage of defect work over the following two weeks: we dipped below acceptable quality standards.
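Computing that week-by-week work mix from exported time records might look like this (the record shape and work-type labels are assumptions for illustration):

```python
from collections import defaultdict

def weekly_work_mix(records):
    """Percentage of tracked time per work type, week by week.

    records: (week_label, work_type, days) tuples, as you might export
    from a project-management tool.
    """
    totals = defaultdict(lambda: defaultdict(float))
    for week, kind, days in records:
        totals[week][kind] += days
    return {
        week: {kind: 100 * days / sum(kinds.values()) for kind, days in kinds.items()}
        for week, kinds in totals.items()
    }

# Hypothetical export: a deadline crunch followed by a defect-heavy week.
records = [
    ("2020-W06", "feature", 8.0), ("2020-W06", "defect", 2.0),
    ("2020-W08", "feature", 4.0), ("2020-W08", "defect", 6.0),
]
mix = weekly_work_mix(records)
# mix["2020-W06"]["defect"] -> 20.0; mix["2020-W08"]["defect"] -> 60.0
```

The jump in the defect percentage is the kind of signal that makes the cost of delivering under pressure visible when managing up.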

Time spent on Projects

Same metric as above, but segmented by project. Helpful for understanding where the team's effort is going and reflecting on it.

Defect Criticality

Defects reported and fixed, plus a score based on the number and criticality of issues reported in a given week. This metric is a recent addition and has been useful for understanding the quality we deliver week to week.
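The post doesn't give the scoring formula, so here is one plausible version: weight each reported defect by its severity and sum per week. The weight values are my assumption.

```python
# Hypothetical severity weights -- tune these to your own triage levels.
SEVERITY_WEIGHTS = {"critical": 5, "major": 3, "minor": 1}

def weekly_defect_score(severities):
    """Weekly quality score: defects reported, weighted by criticality.

    severities: the severity label of each defect reported that week.
    A rising score from one week to the next signals slipping quality.
    """
    return sum(SEVERITY_WEIGHTS[s] for s in severities)

score = weekly_defect_score(["critical", "minor", "minor", "major"])  # 10
```

A weighted sum keeps one critical bug from being drowned out by a handful of trivial ones, which a raw defect count would allow.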

How can you obtain this data?

As you may have noticed, the metrics above are all extracted from our management software and plotted in Excel. Off-the-shelf tools can also automate the metrics and visualization on top of the software you're already using.

This is not an exhaustive list, just a few options I have looked at. I haven't used any of them, though, so I can't provide more detailed information.

Nave: Kanban metrics extracted from your management software. It provides some of the metrics mentioned above, plus others. It seems like a useful option, although I haven’t tried it yet.

Pinpoint: The one closest to what I was looking for — focused on team management and insights with metrics from your management software and source control. Adds AI to provide predictions based on your past data.

Velocity: Team insights based on source control analysis. Interesting information, but based on Pull Requests, which makes less sense to me.



Francisco Trindade

Born in Brazil. Living in NY. Engineering at Braze. Previously MealPal, Meetup (co-founder), and ThoughtWorks.