This is interestingly different from the technique that some use to show scope added during an episode, with a stepped baseline. Here, additional work is accumulating, invisibly, inside your code: work that you will have to do at some point to get a releasable increment. This has the effect of making the true burn-down line shallower than the one on the chart, suggesting that there will be either unanticipated under-delivery of scope or (worse yet) a need to slip delivery.
If the causes of poor internal quality are not rectified, then this effect will repeat, and over successive episodes the team in question will get slower and slower (or deliver less and less).

Talking about this with Karl Scotland and Joseph Pelrine in the bar afterwards, we tossed around the idea that this makes internal quality (traditionally a hard thing to measure) something like the first derivative of project velocity with respect to time.
And now to stretch the metaphor to breaking point. For that to make sense quantitatively, it seems as if what we're really saying is that if the internal quality Q is less than some threshold Qcv (the quality of non-decreasing velocity), then the velocity V will decrease over time:
∂V/∂t ∝ Q − Qcv
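Read literally, and assuming the constant of proportionality (call it k) stays fixed, integrating that relation gives a straight line: V(t) = V₀ + k(Q − Qcv)t.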
Well, it seems reasonable that this will hold for low quality, when Q − Qcv is negative. But what about when Q − Qcv is positive? Is it possible to take a team that is writing code at the level of quality required for non-decreasing velocity (that is, not accumulating hidden extra work) and then increase velocity by increasing quality? I think it is.
I think that a team can push really hard on internal quality and find that there is less work to get finished than they thought.
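To see what that would look like sprint by sprint, here is a toy simulation of the relation above, in Python. Everything in it is an assumption invented for illustration: the value of k, the threshold q_cv (Qcv above), and the choice of a sprint as the unit time step; none of it comes from measured project data.

    def velocity_over_time(v0, q, q_cv=0.5, k=1.0, sprints=10):
        """Step dV/dt = k * (Q - Q_cv) forward one sprint at a time."""
        v = v0
        history = [v]
        for _ in range(sprints):
            v = max(0.0, v + k * (q - q_cv))  # velocity can't drop below zero
            history.append(round(v, 1))
        return history

    # Below the threshold, velocity decays; above it, velocity grows.
    print(velocity_over_time(v0=10.0, q=0.3))  # Q < Qcv: slower and slower
    print(velocity_over_time(v0=10.0, q=0.7))  # Q > Qcv: going well, going fast

The first run drifts downwards, sprint after sprint, towards the invisible-extra-work picture above; the second is the hoped-for case where quality above the threshold buys speed.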
And maybe that's obvious—and maybe it isn't—but certainly I now feel as if I have a much better handle on how to explain to someone why (as the Software Craftsmanship folks say) the only way to go fast is to go well.