I'm going to let the dust settle on my recent posting on patterns, and then do a follow up—some interesting stuff has come out of it. For that, I think, I need a bit of supporting material some of which is here.
Recently this little gem resurfaced on reddit, which prompted a certain line of investigation. Normally I spread a lot of links around my posts[*] partly as aides-mémoire for myself, partly because my preferred influencing style is association, and partly because I believe that links are content. But I really want you to go and look at this cartoon, right now. Off you go.
Back again? Good. This "learning curve" metaphor is an interesting one. Before getting into this IT lark I was in training to be a physicist, so curves on charts are strongly suggestive to me. I want them to represent a relationship between quantities that reveals something about an underlying process. I want the derivatives at points and areas under segments to mean something. What might these meanings be in the case of the learning curve?
Most every-day uses of the phrase "learning curve" appeal to a notion of how hard it is to acquire some knowledge or skill. We speak of something difficult having a "steep" learning curve, or a "high" learning curve, or a "long" one. We find the cartoon funny (to the extent that we do—and you might find that it's in the only-funny-once category) because our experience of learning vi was indeed a bit like running into a brick wall. Learning emacs did indeed feel a little bit like going round in circles, and it did indeed seem as if learning Visual Studio was relatively easy but ultimately fruitless.
But where did this idea of a learning curve come from? A little bit of digging reveals that there's only one (family of) learning curve(s), and what it/they represent is the relationship between how well one can perform a task vs how much practice one has had. It is a concept derived from the worlds of the military and manufacturing, so "how well" has a quite specific meaning: it means how consistently. And how consistently we can perform an action is only of interest if we have to perform the action many, many times. Which is what people who work in manufacturing (and, at the time that the original studies were done, the military) do.
And it turns out, in pretty much all the cases that anyone has looked at, that the range of improvement that is possible is huge (thus all learning curves are high), and that the vast majority of the improvement comes from the early minority of repetitions (thus all learning curves are steep). Even at very high repetition counts, tens or hundreds of thousands, further repetitions can produce a marginal improvement in consistency (thus all learning curves are long). This is of great interest to people who plan manufacturing, or other, similar, operations because they can then do a little bit of experimentation to see how many repetitions a worker needs to do to obtain a couple of given levels of consistency. They can then fit a power-law curve through that data and predict how many repetitions will be needed to obtain another, higher, required level of consistency.
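That extrapolation trick can be sketched in a few lines. This is a minimal illustration, not anything from the original studies: it assumes a two-parameter power law (error falling as a × n^−b), and the observation numbers are entirely made up for the example.

```python
import math

def fit_power_law(n1, e1, n2, e2):
    """Fit error = a * n**(-b) through two (repetitions, error) observations.

    Two points suffice because the model has exactly two free parameters.
    """
    b = math.log(e1 / e2) / math.log(n2 / n1)
    a = e1 * n1 ** b
    return a, b

def repetitions_for(a, b, target_error):
    """Invert the fitted curve: repetitions needed to reach target_error."""
    return (a / target_error) ** (1 / b)

# Hypothetical shop-floor measurements: 10% error after 100 repetitions,
# 5% error after 1,000 repetitions.
a, b = fit_power_law(100, 0.10, 1000, 0.05)

# Predict how many repetitions a required 2% error rate would take.
n_needed = repetitions_for(a, b, 0.02)
```

With these invented numbers the fit predicts on the order of twenty thousand repetitions to reach 2% error, which illustrates the "long" part of the curve: each further halving of error costs a tenfold increase in practice.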
Actual learning curves seem usually to be represented as showing some measure of error, or variation, starting at some high value and then dropping, very quickly at first, as the number of repetitions increases.
Which is great if large numbers of uniform repetitions are how you add value.
But, if we as programmers believe in automating all repetition, what then for the learning curve?
[*] Note: for those of you who don't like this trait, recall that you don't have to follow the links.