I manage a team of 7 engineers. Every two weeks, we close out a sprint, and somewhere in the retro someone mentions velocity. It went up. It went down. It stayed flat. And every time, the conversation that follows is almost completely useless.
Velocity is the most dangerous metric in engineering management. Not because the math is wrong, but because it gives everyone in the room the false confidence that they understand what happened. They don't. You don't. I didn't, for years.
Here's why I stopped caring about velocity, and what I track instead.
Why Velocity Is Misleading
1. It rewards point inflation, not output
The moment velocity becomes visible to anyone outside your team — leadership, product, stakeholders — the incentive structure breaks. Points start drifting upward. A task that was a 3 last quarter is a 5 this quarter. Nobody is shipping more. The chart just looks better.
I've watched this happen on my own team. An engineer once pointed a routine config change as a 5 because "there's risk involved." There wasn't. But our velocity that sprint looked great, and nobody questioned it because the number went up. That's not a metric. That's a collective hallucination.
2. It tells you nothing about flow
A team can have high velocity and still be deeply unhealthy. Velocity doesn't distinguish between a sprint where work moved smoothly through the pipeline and one where 80% of the points were closed in a panicked two-day push at the end. Both show the same number. One of them is sustainable. The other is a team burning out in slow motion.
Velocity is a batch measurement applied to a flow problem. It's like measuring a restaurant's success by counting how many plates they serve per night without asking how long customers waited or how many orders came back to the kitchen.
3. It's trivially easy to game, even unintentionally
Engineers aren't malicious about this. But when a manager presents velocity trends to leadership, the team internalizes that the number matters. They start pulling in small tasks to pad the sprint. They split stories not because decomposition helps, but because more tickets equals more points equals a better chart. The metric becomes the goal, and the actual goal — delivering valuable software predictably — gets lost.
What to Track Instead
After years of staring at velocity charts that told me almost nothing actionable, I shifted to four metrics that actually change how I manage sprints.
Cycle Time
How long does a ticket take from "in progress" to "done"? This is the single most revealing number you can pull from a sprint. A team with a median cycle time of 2 days and a 90th percentile of 4 days is healthy. A team with a median of 2 days and a 90th percentile of 14 days has a bottleneck — probably code review, probably environment-related, probably one engineer carrying a disproportionate load.
Cycle time distributions tell you where work gets stuck. Velocity tells you it eventually got unstuck. One of those is actionable.
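The median-vs-90th-percentile comparison above is easy to compute once you've exported ticket timestamps from your tracker. A minimal sketch in Python — the ticket data here is made up, and the nearest-rank percentile function is one of several reasonable definitions:

```python
from datetime import datetime
from statistics import median
import math

def percentile(values, p):
    """Nearest-rank percentile: smallest value with at least p% of data at or below it."""
    ordered = sorted(values)
    k = math.ceil(p / 100 * len(ordered))
    return ordered[max(k - 1, 0)]

# Hypothetical tickets: (started, done) timestamps pulled from your tracker.
tickets = [
    (datetime(2024, 3, 4, 9), datetime(2024, 3, 5, 17)),  # ~1.3 days
    (datetime(2024, 3, 4, 9), datetime(2024, 3, 6, 9)),   # 2 days
    (datetime(2024, 3, 5, 9), datetime(2024, 3, 7, 12)),  # ~2.1 days
    (datetime(2024, 3, 1, 9), datetime(2024, 3, 15, 9)),  # 14 days: the outlier that matters
]

cycle_times = [(done - started).total_seconds() / 86400 for started, done in tickets]

print(f"median: {median(cycle_times):.1f} days")
print(f"p90:    {percentile(cycle_times, 90):.1f} days")
```

The gap between the two numbers is the story: a tight gap means smooth flow, a wide one means some tickets are getting stuck somewhere.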
Spillover Rate
What percentage of committed work carries over to the next sprint? This is the metric that leadership actually cares about, even if they don't know it yet. They don't care about points. They care about whether the team does what it says it's going to do.
A team with consistent 15-20% spillover has a planning problem. A team with erratic spillover (2% one sprint, 40% the next) has a scope management problem. Either way, spillover tells you something real about predictability — which is what velocity pretends to measure.
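Spillover is a simple set difference once you know which ticket IDs were committed at planning and which closed by sprint end. A sketch, with made-up sprint history — the mean tells you about chronic overcommitment, the spread about erratic scope:

```python
from statistics import mean, pstdev

def spillover_rate(committed, completed):
    """Share of committed tickets that carried over to the next sprint."""
    committed = set(committed)
    if not committed:
        return 0.0
    return len(committed - set(completed)) / len(committed)

# Hypothetical sprint history: ticket IDs are invented.
sprints = [
    ({"A-1", "A-2", "A-3", "A-4", "A-5"}, {"A-1", "A-2", "A-3", "A-4"}),
    ({"B-1", "B-2", "B-3", "B-4", "B-5"}, {"B-1", "B-2", "B-3", "B-4", "B-5"}),
    ({"C-1", "C-2", "C-3", "C-4", "C-5"}, {"C-1", "C-2", "C-3"}),
]

rates = [spillover_rate(committed, completed) for committed, completed in sprints]
print(f"mean spillover: {mean(rates):.0%}, spread: {pstdev(rates):.0%}")
```

A consistently high mean points at the planning problem; a high spread with a modest mean points at the scope management problem.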
Workload Distribution
Are tickets spread evenly across the team, or is one engineer carrying the sprint while another is blocked? I've had sprints where velocity looked normal, but one engineer closed 60% of the points. That's not a healthy sprint. That's a bus factor problem hiding behind a good-looking number.
Tracking workload distribution per sprint forces you to confront uncomfortable truths: who's consistently overloaded, who's consistently underutilized, and whether your sprint planning is actually distributing work or just distributing tickets.
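One way to surface the 60%-of-points problem is to compute each engineer's share of closed points per sprint and flag anyone above a threshold. A sketch with invented names and numbers — the 50% cutoff is an arbitrary choice, not a standard:

```python
from collections import Counter

def point_share(closed):
    """Map each assignee to their share of the sprint's closed story points.

    closed: iterable of (assignee, story_points) tuples.
    """
    totals = Counter()
    for assignee, points in closed:
        totals[assignee] += points
    grand_total = sum(totals.values())
    return {who: pts / grand_total for who, pts in totals.items()}

# Hypothetical sprint: names and point values are made up.
closed = [("dana", 8), ("dana", 13), ("sam", 5), ("lee", 3), ("dana", 3), ("lee", 3)]

for who, share in sorted(point_share(closed).items(), key=lambda kv: -kv[1]):
    flag = "  <- carrying the sprint" if share > 0.5 else ""
    print(f"{who}: {share:.0%}{flag}")
```

Run this over several sprints and the same name keeps appearing at the top — that's the bus factor problem made visible.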
Completion Predictability
Can you reliably say "we committed to X and delivered Y"? Plot your committed-vs-completed ratio over time. A team that commits to 40 points and delivers between 36 and 44 consistently is a predictable team, regardless of whether that number is going up. A team whose delivery swings between 25 and 55 against a 40-point commitment is unpredictable, and no velocity trend line is going to fix that.
Predictability is what builds trust with product and leadership. Not speed. Trust.
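The committed-vs-completed ratio is a one-liner per sprint; what matters is its mean and spread over time. A sketch using the numbers from the example above — two teams with the same average delivery, only one of them predictable:

```python
from statistics import mean, pstdev

def predictability(history):
    """Mean and spread of the completed/committed ratio.

    history: list of (committed_points, completed_points) per sprint.
    """
    ratios = [completed / committed for committed, completed in history]
    return mean(ratios), pstdev(ratios)

# Hypothetical histories: both teams commit to ~40 points per sprint.
steady  = [(40, 38), (40, 41), (40, 36), (40, 44), (40, 40)]
erratic = [(40, 25), (40, 55), (40, 30), (40, 52), (40, 38)]

for label, history in (("steady", steady), ("erratic", erratic)):
    m, s = predictability(history)
    print(f"{label}: mean ratio {m:.2f}, spread {s:.2f}")
```

Both teams average close to 1.0; the spread is what separates a team you can plan around from one you can't.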
How to Make the Shift
You don't have to throw velocity out overnight. But stop presenting it as the headline metric. Move it to an appendix slide. Replace it in your sprint dashboard with cycle time and spillover rate. When someone asks "how's the team doing?" answer with "we're completing 92% of committed work with a median cycle time of 2.5 days" instead of "velocity is 47."
The first time you present these metrics instead of velocity, someone will ask where the velocity chart went. Explain that velocity measures how the team estimates, not how it performs. Then show them the spillover trend. That conversation alone is worth the switch.
If you're building your own sprint dashboard or looking for one that already surfaces these metrics, I built CurlyFry.ai specifically because I got tired of extracting this data manually from Jira every sprint — it tracks cycle time, spillover, workload distribution, and completion predictability out of the box.
The metrics you track shape the conversations you have. And the conversations you have shape how your team works. Velocity gave us the wrong conversations for a long time. It's time to track what actually matters.
What's the most misused metric on your team?