So, Allan Kelly has picked up on something that I kind-of pretty much did say at XpDay this year—but not quite. I certainly am "part of the UK Agile set", especially if we emphasize the "UK" bit. What I'm not is part of the gang of independent consultants in London who make up many of the high-profile early adopters in the UK, and it's true that I never worked for Connextra or any of the other highly-publicized, London-based "Agile" shops. Why is this worthy of note? Only because it relates to a sign of un-wellness in the community that I perceived at the session where Allan heard me say that. It was a double-header, a somewhat strange work-shoppy type of thing, followed by a panel. It was while speaking on this panel that I said what Allan noticed.

The point that I was, perhaps unsuccessfully, trying to make was that the disenchantment with the current state of the Agile world is perhaps more to do with the geographically constrained, hot-house world of ex-Connextra, ex-another place, folks circulating around the same bunch of clients in London than with anything wrong with the community at large.

In particular, this idea that any "mojo" has been lost, or that by accommodating corporate culture in amongst Agile adoption some "compromise too far" has occurred seems very off base to me. We pretty much agreed on the panel that the question "Have you compromised your Agility?" is a silly one, since "Agile" is primarily a marketing buzzword and it labels what is pretty much a bag of platitudes. How can we tell that these are platitudes? Well, they don't help us choose between reasonable alternatives: the hardest-core SSADM, PRINCE-2 wonk around, if asked in those terms, would probably tell you that working software is more valuable than comprehensive documentation (and probably add "you idiot" under their breath). They might not necessarily behave in a way aligned with that judgment mid project, but that's a whole other story. Fretting about whether or not you've compromised a platitude doesn't seem like the way forward.

There was a lot of talk about "Excellence", too.

And all of this (along with "wither the Agile Alliance?" and "what can we do about Scrum?") may seem like a topic of crucial import, if you spend a lot of your time inside the echo-chamber. Are we Agile? No, I mean, are we really Agile? Truly? Are we Excellent? Is this the real, true, expression of the Principles? I've seen long-lived Agile teams tie themselves in knots (and tear themselves apart, and get—rightly—caned by their business sponsors) over this sort of self-absorbed stuff.

Now, I have in mind a team I know that adopted some (not all) Agile principles. Are they purely Agile? No. Are they living the dream? No. Did they have to make compromises between textbook Agile and their corporate culture to get where they are? Yes. Might they do better if they hadn't had to? Yes. But...

Are they three times more productive than they were before? Yes! Is the internal and external quality of their system hugely greater than before? Yes! Do their management revel in being able (for the first time ever) to believe, trust and make intelligent decisions based on their status reports? Yes!

I am most reluctant to embrace, in fact I absolutely repudiate, a model of what it means to be Agile that demands that I call them failures because their (ongoing) improvement required compromise with their corporate culture. I'm not terribly interested, these days, in fine-tuning some on-going Agile adventure. I'm interested in taking what are now increasingly well-tried techniques and using them to get the big wins, the big increases in quality and productivity that so many shops are crying out for.

Architecture is contextual, but don't play dumb

A great post from Jacob on the contextual nature of architectural choices. Shock news: he doesn't think that dependency injection frameworks are right for him and his business.

He makes a good case for in-house development being the place where YAGNI is the primary principle, at least so far as the choice to spend effort (= money) on technology choices like that. I sympathize: I've seen in-house teams (with a declared adoption of XP, no less) get into huge rows with their business sponsors because their devotion to a certain architectural stance cost them productivity. They burned up man-decades on trying to get the damn thing to work, ended up with an incomprehensible codebase and then, because it was the right thing, started all over again, only with a determination to get it right this time. IIRC, they eventually ended up implementing this framework three times. Productivity went down and down. The "coach" and "customer" couldn't be in the same room at the same time. So sad. Key to this falling-out was the idea that the (paying) customer isn't entitled to an opinion about how the work is done.

Meanwhile, a side discussion at one XPDay session this year touched on the question of taking into account a need that the customer has told you they have, but that they haven't given a high enough priority to for it to be in the current release. It would be wrong, according to one understanding of YAGNI, to implement this release in a way sympathetic to that upcoming goal. But why? If the customer has told you that some need is coming (soon, that is, as in "next release", and not "in the next five years"), and this lets you make a smarter choice about implementing the nearer-term goals...well, it would be foolish to ignore that information, wouldn't it?

YAGNI means "don't spend six months building a framework to support six days development", it doesn't mean ignore highly certain knowledge about the immediate future direction of the system.

Erlang for Finance

So, the forthcoming "Hardcore Erlang" book just got a lot more interesting.

The example application (which, apparently, will really get built) will now be the core of a stock exchange. This is very exciting to me. Recently I made a career change from spending most of my time working with telecoms systems to spending most of my time with financial systems. And I have been flabbergasted at the time, expense and painful tears of frustration that go into building financial systems, particularly trading systems, given that their basic function is so simple. A lot of this seems to be because the "enterprise" technology stacks turn out (oh the irony) to be dramatically ill-suited to the task.

These systems receive messages and emit messages, perhaps in different formats, perhaps onto a different transport. They do various kinds of fairly simple processing on messages, typically an "enrichment" of a message with some extra data looked up from somewhere. They filter out some messages. And so on. They are, in point of fact, very very similar to switches, bridges, routers and firewalls. They even have similar soft real-time, high availability, high throughput characteristics.
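That basic shape is simple enough to sketch in a few lines. Everything below (the field names, the reference data, the enrich and filter steps) is illustrative, invented for the sketch rather than taken from any real trading system:

```python
# An illustrative message-processing pipeline: receive, enrich with data
# looked up from somewhere, filter, emit. All names and fields here are
# made up for the example.

REFERENCE_DATA = {"VOD.L": {"name": "Vodafone", "currency": "GBP"}}

def enrich(message):
    """Enrichment: merge in reference data looked up by symbol."""
    extra = REFERENCE_DATA.get(message.get("symbol"), {})
    return {**message, **extra}

def keep(message):
    """Filtering: drop messages without a price."""
    return message.get("price") is not None

def process(inbound):
    """The whole pipeline: enrich every message we keep, emit the rest."""
    return [enrich(m) for m in inbound if keep(m)]

inbound = [
    {"symbol": "VOD.L", "price": 134.5},
    {"symbol": "VOD.L", "price": None},  # filtered out
]
print(process(inbound))
```

The real systems do this at much higher volume and with much stricter latency and availability demands, of course, but that is the whole of the functional idea.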

Over in the telecoms world, for some strange reason they don't build routers out of J2EE stacks. They don't handle voice packets with XSLT. Maybe there's a lesson there for someone. And maybe having a stock exchange (or something like it) out there written in the kind of technology that routers are written in might help unwedge a few conversations. I live in hope.

Exemplary Thoughts

So, I was asked to write up the "lightning talk" on examples and exemplars I gave at Agile 2007. That was a short and largely impromptu talk, so there is some extra material here.


It used to be that botanists thought that the Sugar Maple, the Sycamore and the Plane trees were closely related. These days they are of the opinion that the Sycamore and Sugar Maple are closely related to one another, but the Plane is closely related to neither. This is one example of the way that our idea of how we organise the world can change.

As it happens, this confusion is encoded in the binomial names of these species: the Sugar Maple is Acer saccharum, the (London) Plane tree is Platanus x acerifolia, while the (European) Sycamore is Acer pseudoplatanus. Oh, and if you are a North American then you call your Plane trees "Sycamores" anyway. And furthermore, not one of these trees is the true Sycamore: that's a fig, Ficus sycomorus.

Botanists and zoölogists are the masters of classification, but as we see they have to modify their ideas from time to time (and these days the rise of cladistics is turning the whole enterprise inside out).


A well-known dead Greek laid the foundations of our study of classification about two and a half thousand years ago, in terms of what can be predicated of a thing. It all seemed perfectly reasonable, and was the basis of ontological and taxonomic thinking for many centuries. This is interesting to us who build systems, because the predicates that are (jointly) sufficient and (individually) necessary for a thing to be a member of a category in Aristotle's scheme can be nicely reinterpreted as, say, the properties of a class in an OO language, or the attributes of an entity in an E-R model, and so forth. All very tidy. One small problem arises, however: this isn't actually how people put things into categories! It also has a terrible failure mode: as we can see from all this "acerifolia" and "pseudoplatanus" stuff in the trees' names, the shape of the leaves was not a good choice of shared characteristic to use to classify them. It is from this mistake (amongst other causes) that the unspeakable pain of schema evolution arises.
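To make that mapping concrete, here is the Aristotelian scheme reduced to code: membership of a category decided by predicates that are individually necessary and jointly sufficient. The predicates and attributes below are invented for the example, chosen to echo the maple/plane mix-up:

```python
# Aristotelian categorisation as a predicate: each clause is necessary,
# and together they are sufficient, for membership of the category.
# The attributes are illustrative, not botany.

def is_maple(tree):
    return (tree["leaf_shape"] == "palmate"   # shared with the Plane!
            and tree["fruit"] == "samara")    # this clause does the work

sugar_maple = {"leaf_shape": "palmate", "fruit": "samara"}
london_plane = {"leaf_shape": "palmate", "fruit": "achene ball"}

print(is_maple(sugar_maple))   # True
print(is_maple(london_plane))  # False: leaf shape alone would mislead
```

Change your mind about which predicates matter, as the botanists did, and every schema built on the old set has to move with you.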

The Greeks, by the way, already knew that there were difficulties with definitions. After much straining between the ears, an almost certainly apocryphal story goes, the school of Plato decided that the category of "man" (in the broadest sense, perhaps even including women and slaves) was composed of all those featherless bipeds. Diogenes of Sinope ("the cynic") promptly presented the Academicians with a plucked chicken. At which they refined their definition of Man to be featherless bipeds with flat nails and not claws.

In 1973 Eleanor Rosch published the results of some actual experiments into how people really categorise things, which seem to show that the members of a category are not equal (as they are in Aristotle's scheme): a small number of them are dominant. Rosch calls these the "prototypes" of the category. And what these prototypes are (and therefore what categories you recognise in the world) is intimately tied in with your experience of being in the world. And these ideas have been developed in various directions since.

One implication of the non-uniformity of categories is that they are fuzzy, and that they overlap. The import for us in building systems is that maybe the reason that people have difficulty in writing down all these hard-and-fast rules about hard-edged, homogeneous categories of thing, as many requirements elicitation techniques want, is that that's just not a good fit for how they think about the world, really.


But perhaps examples do. Examples can be extracted from a person's usual existential setting which means that they can be more ready-at-hand than present-at-hand. This is probably good for requirements and specifications (it's not universally good: retrospectives force a process in which one is usually immersed to be present-at-hand, and this is good too). Also, people can construct bags of examples that have a family resemblance without necessarily having to be able to systematize exactly why they think of them as having that resemblance. This can usefully help delay the tendency of us system builders to prematurely kill off options and strangle flexibility by wanting to know the nethermost implication and/or abstraction of a thing before working with it.

And maybe that's why example-based "testing", which is really requirements engineering, which is really a communication mode, does so much better than the alternatives.

I'm proposing a session on this very topic for Agile 2008. I encourage you to think about proposing a session there, too.


The "test" word in TDD is problematical. People are (rightly) uncomfortable with using it to describe the executable design documents that get written in TDD. The idea of testing has become too tightly bound to the practice of building a system and then shaking it really hard to see what defects fall out. There is an older sense of test, meaning "to prove", which would help but isn't current enough. Fundamentally, though, these artefacts are called tests for historical reasons (ie, intellectual laziness). One attempt to fix this vocabulary problem has the twin defects of going too far in the direction of propaganda, and not far enough in the actual changes it proposes.

In any case, I'm more interested in finding explanatory metaphors to help people use the tools that are currently widely available and supported than I am in...doing whatever it is to people's heads that the BDD crowd think they are doing. Anyway, I've found that it's a bit helpful to talk about test-first tests being gauges (as I've mentioned in passing before). Trouble is that too few people these days have done any metalwork.

A Metaphor Too Far

So, the important thing about a plug gauge or such is that it isn't, in the usual sense, a measuring tool. It gives a binary result, the work piece is correctly sized to within a certain tolerance or it isn't. This makes, for example, turning a bushing to a certain outside diameter a much quicker operation than it would be if the machinist had to get out the vernier micrometer and actually measure the diameter after each episode of turning and compare that with the dimensioned drawing that specifies the part. Instead, they get (or assemble, or make) a gauge that will tell whether or not a test article conforms to the drawing, and use that.

And this is exactly what we do with tests: rather than compare the software we build against the requirement after each development episode, we build a test that will tell us if the requirement is being conformed to. But so few people these days have spent much time in front of a lathe that this doesn't really fly.
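For those who haven't stood in front of a lathe, here is the same move in code. The nominal size and tolerance are invented for the example; the point is that the gauge hands back go/no-go, not a measurement:

```python
# A gauge, not a measuring instrument: it answers go/no-go against a
# tolerance built into it. Nominal size and tolerance are invented
# for the example.

NOMINAL_MM = 25.0
TOLERANCE_MM = 0.05

def gauge_ok(diameter_mm):
    """Go/no-go: conforms to the drawing, or doesn't. No number escapes."""
    return abs(diameter_mm - NOMINAL_MM) <= TOLERANCE_MM

# A test-first test makes exactly the same move against a requirement:
def test_bushing_diameter():
    turned_diameter = 25.03  # pretend this came off the lathe
    assert gauge_ok(turned_diameter)

test_bushing_diameter()
print(gauge_ok(25.03), gauge_ok(25.2))  # True False
```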

But, flying home from a client visit today my eye was caught by one of those cage-like affairs into which you dunk your cabin baggage (or not). It would be far too slow for the check-in staff to get out a tape measure, measure your bag, and compare the measurements with the permitted limits. So instead, they have a gauge. From now on (until I find a better one), that's my explanatory metaphor. Hope it works.

Agile 2007

Next time, I'll be writing more about the sessions I attended, and that will have a rather more up-beat tone, since they were pretty good. This post contains my general observations about the conference, though, and they are not quite so good.

So, a somewhat belated write-up of Agile 2007, since I've been as sick as a dog pretty much since I got back from it.

I'm sure that had something to do with spending a week in a refrigerated dungeon. It wasn't clear, upon arrival, where the conference was going to be; the hotel didn't look big enough. Turns out that the conference centre is in the basement. And the sub-basement. And the floor below that. No natural light, no clocks, no external sounds or environmental cues. This plus jet-lag contributed to a disconnected, floating feeling. That and the mammoth programme.

1100 people attended this year, and it is seemingly the belief of the conference committee that this requires a very full programme to keep them all busy all the time. Individuals and their interactions, eh? No, no, no, session:coffee:session:lunch:session:coffee:session:awkward evening hi-jinks, that's the way. Apparently, next year is going to be even bigger.

And this programme (and session materials) has to be fixed far, far in advance. Responding to change, eh? There's nothing like eating your own dogfood, and this is nothing like eating your own dogfood. Trouble is, to get enough sessions to fill that many slots you have to accept a lot of sessions, which means that the bar is necessarily lower than it might otherwise be.

There is no there there

The biggest problem with the venue was that, being spread over three (four, if you count the main hotel atrium) floors it had no identifiable centre, no focus for circulation, so fewer opportunities for the ad-hoc meetings that make these shows so valuable. The nearest thing to a "crush" was the CWAC ("Conference Within A Conference"), a rather half-hearted OpenSpace-ish sort of affair in the most remote part of the centre. More of a "Conference Tacked On The Side Of A Conference". Apparently, the organizers of the CWAC chose that room themselves on the basis that it was broken up by pillars, of which I can see the sense. But really, the committee should have had the intestinal fortitude and spinal rigidity to say "no, we'll figure something out with the pillars, but the place for the open space is at the heart of the conference."

General observations: fewer "rock stars"; more vendors; more women; more vendors; more non-programmers; more vendors; good international presence; more vendors.

Cheap at Half the Price

Did I mention the vendors? Some years ago I was speaking at Spa, and amongst other things was asked to join in an evening entertainment whereby a bunch of us had to give a speech both for and against a topic. I drew "extreme programming" (which was still a hot topic at the time ;) and one of the points against it that I made was that while it's all well and good that Beck tells us
Listening, Testing, Coding, Designing. That's all there is to software. Anyone who tells you different is selling something.
but in fact almost everyone in the room was selling something. And furthermore, "no-one", I said, "is going to get rich charging commission on the sale of these things" and I threw the handful of index cards that held my notes for the talk to the ground. Martin Fowler, I noticed, was nodding vigorously at that point.

Well, these days there are all these folks who very definitely are selling something: great big honking lumps of tool intended to "simplify" the planning, management and execution of Agile projects.

Like Cheese?

So what does that all add up to? I think it adds up to a community that has become "mature". Maturity is one of those concepts that the petit bourgeoisie use to rationalise their fear and loathing of freedom and imagination. This has its up side: a "mature" flavour of Agile is going to be a much easier sell to a large range of large corporate clients (I include government departments under this heading) than the Zen/Hippy flavour, but it doesn't have anything like the capacity to be a radically dynamic force for truth and light in the world. I looked briefly into Mary Poppendieck's session, which sounded very interesting but was completely full and I didn't want to stand for 2 hours. I'm beginning to feel that there's something slightly creepy about the rush to embrace Lean principles in the agile community, because Lean is all about maximising throughput by minimizing waste. Now, most development teams need to improve their production, it's true, but isn't there a bit more to life than that?

Brian Marick was handing out posters listing the items that he feels are missing from the Agile Manifesto, which are: discipline, skill, ease, joy. Presumably, in that order. Doesn't that sound like a good deal?

Complexity at Agile 2007

This thread came from here, and will be continued...

Just walked out of the first keynote at Agile 2007: I don't need to hear about an amateur mountaineer's love-life (yes, really) no matter how "inspirational" it's supposed to be. What have we come to?

Anyway, that gives me time to get ready for the latest adventure in test-driven development and complexity.


Well, that didn't go quite as smoothly as I'd hoped. The room I'd been given was in some sort of bunker under the hotel and while it did technically have a wireless network connection the feeble signal that we had was unable to support as many laptops as there were in the room. So it was a bit of a struggle to get folks set up to use the tool.

However, some people did manage to gather some interesting data of which I hope that some will be shared here. Certainly, folks have found some interesting correlations between the numbers that the tool emits and their experiences working with their code. Especially encouraging is that folks are applying the tool to long lived codebases of their own and looking at historical trends. These are the sorts of stories that I need to gather now.

Note: the tool is GPL (and the source is in the measure.jar along with the classes). Several folks are interested in C# and Ruby versions, which I'd love to see and am happy to help with.

I sat down with Laurent Bossavit and we experimented to see if we could get equally interesting results from looking at the distribution of size (ie, counting line ends) in a codebase, and it turns out not. Which is a shame, as that would be easier to talk about, but is also what I expected, so that's kind-of OK.
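For anyone wanting to repeat the experiment, the tally itself is trivial; the toy "methods" below stand in for units of code extracted from a real codebase:

```python
# Tally the sizes of code units (size = number of line ends, as in the
# text), then look at the resulting distribution. The "methods" are a
# toy stand-in for a parsed codebase.
from collections import Counter

methods = [
    "a = 1\n",
    "a = 1\nb = 2\n",
    "a = 1\nb = 2\nc = 3\n",
    "x = 0\n",
]

def size(source):
    return source.count("\n")  # counting line ends

distribution = Counter(size(m) for m in methods)
print(sorted(distribution.items()))  # [(1, 2), (2, 1), (3, 1)]
```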

A lot of good questions came up in the session, pointers to where I need to look next: Is there a difference between code written by solo programmers vs teams? Do you get the same effect from using functional (say, Fit style) tests as from unit tests? Is there any correlation with test coverage? Exactly what effect does refactoring have on the complexity distribution? Thanks all for these.

Laurent seemed at one point to have a counterexample to my hypothesis (which was a very exciting proposition), code that he knew had been done with strong test-first, but had a Pareto slope of about 1.59 (and an amazing R^2 of 1.0). But on closer examination it turned out that the codebase was a mixture of solid TDD code that by itself had a slope of 2.41, and some other code (which we had good reason to believe was (a) poor and (b) not test-driven) that by itself had a slope of 1.31.
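For the curious, slope figures like these come from fitting a power law; a least-squares fit on log-log data is one way to get such a number. I'm not claiming this is exactly what the measure tool does, and the slopes quoted above are magnitudes:

```python
# One plausible way to obtain a "Pareto slope": least-squares fit of
# log(y) = a + b*log(x), returning b. A sketch, not the measure tool's
# actual algorithm.
import math

def pareto_slope(xs, ys):
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    return (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
            / sum((u - mx) ** 2 for u in lx))

# Synthetic data lying exactly on y = 1000 * x**-2, so the slope is -2:
xs = [1, 2, 4, 8, 16]
ys = [1000 * x ** -2 for x in xs]
print(round(pareto_slope(xs, ys), 2))  # -2.0
```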

Unfortunately, I wasn't able to make it to the research paper session where this thing was discussed, or this. But I need to catch up with those folks. In particular, the IBM group report that with TDD they don't see the complexity of the code increase in the way that they expect from experience with non TDD projects.

What "Ivory Tower"?

Strange rumblings on the Ruby front.

I recall walking through the grounds at Imperial, heading for the first XpDay in the company of Ivan Moore. As we passed by he noted that there was an ivory tower for me to move into. Now, I am prone to a little bit too much theoretical musing on things, it's true, but I thought this comment was a bit rich coming from someone with a degree, MSc and PhD in Computer Science.

Anyway, this "ivory tower" notion is a curious one and seems to be very much a Yes Minister sort of thing: I pragmatically balance theory and practice, you pay too much attention to unnecessary detail, he lives in an Ivory Tower. The phrase has strong overtones of academic detachment. Strange, then, that this posting should suggest that Smalltalk was ever in one. Smalltalk was developed in an industrial research lab paid for by a photocopier company. An early (maybe the earliest) adopter of Smalltalk, where some of the folks responsible for spreading certain ideas from the Smalltalk world more widely worked, was an electronic engineering firm.

Currently (mid 2007) Smalltalk is (still) being used commercially in the design of gearboxes, it's being used in the pricing of hairy financial instruments, it's being used to do hard stuff for money. I've even earned money for writing Smalltalk myself, within the last 5 years.

Lisp, now, Lisp did have to escape from an ivory tower. And that didn't work out too well, so it tried to get back in. But the door had been closed behind it. Ouch.

Well, if, as is suggested, "Ruby is a unix dialect of Smalltalk" then it would seem that being unixified is not such a completely good thing. Really, spelling blocks with {}s instead of []s is neither here nor there (although having that invisible magic parameter for them is really bad). The theory is that, being now outside the "VM" (I think that Giles probably means "image"), Ruby-is-Smalltalk plays much better with others. That's true enough. Like certain other refugee systems taking shelter in the unix world, Smalltalk really wants to be your whole world. But we kind-of know, and certainly the unix way is, that that's not a great model.

What's a shame, though, is that if Ruby is Smalltalk then it is Smalltalk with the most important thing taken out: life. And life comes with the image, and not from objects (or worse yet, merely the instructions for building the objects) being trapped in files. Sorry, but that's the way it is. So, until there's a ruby environment as lively as a Smalltalk image, with all its browsers and such, I can't see the ruby-is-smalltalk metaphor doing anything but spreading confusion and disappointment.

Certification, testing, RDF, semantics, a post-lunch haze

Reg "no relation" Braithwaite suggests here a certification for programmers, to cover:
  • Continuous integration
  • Black box testing
  • White box testing
  • Design for testing
  • Probabilistic testing
  • etc. [you get the idea]

Basically, all the things that demonstrate that your system is sound.

Most interestingly, Reg gives this caveat:

Like everyone else in favour of certification, I have my own ideas about what skills and knowledge you need to demonstrate to get your certification. Unlike everyone else, I think I would fail my own certification if I didn’t do a whole lot of studying. [my emphasis]
Smart. I have this suspicion that a lot of the angst that surrounds the sporadic talk of "Agile" certification comes from (an unwillingness to admit to) fear of not making the cut. Too many programmers have way too much ego invested in their own assessment of how brilliant they are, and the notion that someone is going to come watch them do their job and maybe say "d'you know what, this isn't very good" is absolutely terrifying to them. Of course, the industry is beset with certifications that aren't worth the mouseclick it takes to get them, and the industry is also beset with development managers who don't do their own hiring and allow HR departments to abuse certifications during recruitment. The Agile corner of the IT world is not alone in this.

But that's not what I want to write about today.

When you eliminate the impossible, whatever remains--however improbable--must be the truth

Over at enfranchisedmind Brian had something to say about Reg's post that segued from certification through testing to static typing. Ah yes, testing vs static typing. Brian presents a list of quite impressive qualities of a program that can be enforced by OCaml's static type system, even things like "That a Postgresql cursor can only be created and accessed within a transaction." That's pretty sweet.

But, you know, correctness of the code in those sorts of terms is only part of the story. And perhaps not even the important part. Brian pretty much says this himself:
One thing I have realized recently. And that is that our job [as programmers] is not to produce working code. It’s a heretical idea, but it really isn’t. Our job, as software developers, is to solve problems.
Emphatically yes. He also says:
The mantra here [..] is “make illegal states impossible to represent”. Or, to quote my old CS teacher, “if you don’t mean it to do that, don’t make it possible to do that!” And more so than any testing, even unit testing, static types are on the front line for doing that.
Which is great!

Thing is, though, that Aristotle wouldn't like systems engineering much (he'd probably have liked programming, but that's a whole other thing), because merely having legislated out of existence all the illegal states we can think of doesn't mean that we're guaranteed to do the right thing. As in, the thing that the customer will be willing to pay us for.

And that brings me on to something I've been meaning to write down about RDF.

Mere Semantics

A friend of mine is pondering a career change to work in the field of "document management", and he called me the other day to pick my brains about the Dublin Core. Dublin Core is fine by me, because it's meta-data. Nice to have that in a uniform machine readable format. But then there's this sort of thing:
here we in essence have the reason [that duck typing doesn't "scale"]: one has to limit the context, one has to limit the objects manipulated by the program[...] Enlarge the context, and at some point you will find objects that don't fit the presuppositions of your code. [...] the solution [...] requires one very simple step: one has to use identifiers that are context free. [...] then they will always mean the same thing, and so the problem of ambiguity will never surface.
The proposal, which appears to be a serious one, is to write code like this:
<> a owl:Class;
rdfs:subClassOf <>;
rdfs:comment "The class of ducks, those living things that waddle around in ponds" .

and so forth.

We'll draw a veil over the matter of possibly having a different notion of ducks (in which case the dreaded UFO wobbles into view, a philosophical hub-cap suspended from a cognitive string), and consider instead what it means to imagine that this sort of scheme would capture the semantics of what it means for an object to be a duck.

I suspect that if one has studied a lot (but possibly not quite enough) of computer science then one ends up thinking that something like RDF is what "semantics" means: an arbitrarily far ramified graph of things joined together by relations (that are things, that are joined together...)

Disclosure: I have studied exactly 0 Computer Science. I have nil qualifications in the area. I mean, I've read this, and these, and so forth, but I've never gained a qualification in Comp Sci.

But I have built a few systems. And based on that experience, what I think the semantics of a fragment of program text is, is the change in the users' behaviour after the text is executed. And this is necessarily a localised, context-dependent thing.

Which brings us back to tests. Now, I don't much like the word "test" these days for the kind of executable document that gets written using *unit, or FIT or whatever. These days I tend to talk more about them in terms of being gauges. Anyway, the great thing about this sort of test is that it is explicit, operational, concrete and example- and exemplar-based. This is greatly at odds with the common understanding of "specification", which tries to have a sort of abstract, universal, infinitude to it but is excruciatingly hard to get right. The other approach is much easier to get right, and for good cognitive reasons, it turns out.

And this is why static typing, of whatever flavour of whizzyness, comes at the systems problem from only one side, and I'd suggest the less valuable one. Prohibiting the wrong thing isn't enough, we must also encourage the right thing. And to do that we must write the right thing down. And the right thing is intimately tied up with the existential detail, of some individual in some situation at some time.

This is why I quite like Reg's proposed certification (which, by the way, I very much doubt I could get either). It isn't a certification for QAs, it's a certification for showing that you know what to do, and then showing that you've done it.

How web 2.0 will cost us money

I'm writing this on a 1GHz G4 Ti PowerBook with 1GB RAM. It's so old, I finished paying for it some time ago.

Until about a year ago I had no desire to upgrade: I don't edit video, I rarely gimp huge images. It's travelled half-way around the world with me (and survived the plunge out of a 747 overhead locker with great aplomb). The machine compiles code of the size I can write by myself fast enough for me not to care about it, it can even do that while playing back music. It can just about drive a 1680x1050 monitor (so long as nothing too visually exciting happens). But these days, browsing the web with this machine is increasingly painful as the 2.0 sites get more and more JavaScript intensive, and as that trend spreads to more and more sites. Try doing some general browsing with JS turned off and see how many plain old websites--not a social tag cloud in sight--just don't work at all without it. This is a sad state of affairs.

I might add that editing this blog posting is slightly more painful than I'd like: Firefox's CPU usage is peaking at about 50%, which is ludicrous.

When I started my professional programming career I was well pleased to have a SPARCstation 5 as my desktop machine. Check out those numbers: 110 MHz! You'll still see those boxes occasionally today (they're very well built), hidden away in data centers running some admin daemon or other. For a while Sun sold headless SS5s as web servers, imagine that. At the time the thought of a "super-computer" grade laptop like the PowerBook I have here would have been laughable. And now it's a crying shame that all this capacity is being burned up in the name of this sort of thing (95% CPU), clever as it is. Which is why I salute Ted for this little investigation, and find this survey of the art rather dismaying for its implications.

Complexity Results from Spa 2007

The story came from here

Spa 2007 attendees saw this presentation, then ran the measure tool over a range of code and kindly recorded some of their observations in comments here. Many thanks to them.

Especially rewarding is Duncan's observation regarding the time-variance of the figures across a refactoring episode. Several attendees also confirmed my suspicion that there's something odd about language implementations.

The story continues here.


Matthew Skala published (some time ago) an excellent article highlighting some of the ontological confusion that arises when programmers' and ordinary people's thoughts overlap. I wish I could remember where I first came across the notion that a lot of the problems that arise when that happens (say, when trying to explore a customer's wants and needs of a new IT system) are due to the spectrum of abstract <-> concrete being exactly backwards for programmers and real people. If you have a notion where that might have been, please let me know.

To illustrate: there used to be this subject called "Systems Analysis" that folks heading for programming jobs used to get taught. You could probably be a lot older than me and have been taught it, but not be much younger and not have. Which I suspect is why there was any interest at all in this book. To quote Nat Pryce,
Until I started working in "enterprise IT" I didn't realise that people didn't do this.
Well, back in the day I was taught with examples that pretty much sent the message that, say, a "bank account" was a terribly, terribly abstract thing; that an (as it was then) OMT box denoting a class representing bank accounts was a bit more concrete; that a C++ class definition representing the OMT class was more concrete than that; more concrete still was an object that was an instance of that class. And most concrete of all was the bit pattern in memory that implemented that object. OK. Imagine the embarrassment of trying to explain this view of the world to someone with a crippling overdraft. "No, you see, a bank account is a terribly abstract thing..."

In space, no-one can hear you scream

Another exemplar: This page is clearly a labour of love, and quite impressive in itself. I'm not sure it does anything to make monads any more tractable, however. What it does do is express very clearly this interesting aspect of programmers' thinking, albeit through the reverse mechanism of trying to avoid abstraction.
  • "I shall use a metaphor of space stations and astronauts to cut down on the degree of abstraction"
  • "A space station is just a metaphor for a function"
  • "These astronauts could be anything, Americans, Frenchmen, dogs, gorillas, whales, whatever"
  • "The other thing is to note that is our space stations are typed"
and so on.

"A space station is just a metaphor for a function" But I have zero personal experience of space stations, whereas I do have personal experience of functions, so I struggle to find the former more concrete than the latter. Thinks: "the typed space station takes a whale in a space suit..."

I highlight this not to mock Eric (to whom more power and good luck), but because of the crystal clarity with which this language highlights the deep, deep strangeness of how programmers (including myself) communicate with each other as regards what is abstract, never mind with real people.

"What do you read, my lord?"

This sort of thing wouldn't matter so much if the notions of the world convenient to us as programmers didn't keep leaching into the real world. Here's a story that turns out to be about time, identity and state (more notions about which we sometimes have fishy ideas): I was in a particular bookshop, since gone out of business (and this story might partially explain why), and I saw a copy of a Chris Ware book that I didn't have. The copy on the shelf was pretty badly beaten up, so I took it to the till and asked the woman there "do you have another one of these that's in better condition?"

Ware's books sometimes have the unusual feature that the barcode is on a wrapper and not on the book proper, and this copy had lost its wrapper. Well, she searched every surface of the book that she reasonably could to find a barcode to scan, but failed. She handed the book back to me with an apologetic shrug and said "I'm sorry, I don't know what this is". I'll repeat that: "I don't know what this is." A state of profound confusion for someone who works in the book trade, standing in a bookshop, with a book in her hand.

What was meant was something like "the IT system foisted upon me by the senior management of this retail chain will not allow me to perform any actions with this object because I cannot scan it". But what she said was "I don't know what this is", which is perhaps less explicit but more telling. The truth is that the IT system of that shop is not able to distinguish books. It likely has some sort of idea of how many items with a particular barcode have been scanned in, and how many scanned out, and therefore how many may reasonably be expected to be found in the shop at any given time. But actual books are beyond it. In the world of this stock control system, rows in a database table are more real, more concrete, than the items on the shelves, and woe betide you if you try to do anything to the latter without the former. It seems as if the history of this object had rendered it (in the context of the bookshop) without an identity. Remarkable!

And so the person working in the shop has been placed in a world where the tangible objects that are the essence of that business cannot be worked with for want of that key. A book, you see, is a terribly abstract thing...


So, there's this thing called the FizzBuzz test, which it turns out that a lot of people applying for "programmer" jobs can't pass. In a spirit of adventure, I had a go at it myself:
(require (lib "1.ss" "srfi")) ; SRFI-1, for iota
(define (fizz-buzz n)
  (let ((multiple-of-three (= 0 (modulo n 3)))
        (multiple-of-five (= 0 (modulo n 5))))
    (if multiple-of-three
        (display "fizz"))
    (if multiple-of-five
        (display "buzz"))
    (if (not (or multiple-of-three multiple-of-five))
        (display n))
    (newline)))

(for-each (lambda (n)
            (fizz-buzz (+ 1 n)))
          (iota 100)) ; 1 to 100
It took me a bit longer than the "under a couple of minutes" that Imran wants a good programmer to do it in (but this wasn't written on a piece of paper; it was written in DrScheme, and I ran it, and it works), but less time than the 15 minutes that "self-proclaimed senior programmers" sometimes take--which I suppose makes me a middling-good semi-senior programmer, which is about right.

The discussion of this on CodingHorror pulls together a lot of amazed reactions from various places, but none of them quite seem to grasp the nettle, which is not so much that there are a lot of really bad programmers around, but that there are a lot of IT professionals around who apply for "programmer" jobs even though their work experience doesn't actually include any programming.

This is a relatively new thing, and seems to be different from the old "data processing" vs "real programming" divide: I'll bet a grizzled COBOL walloper could polish off the fizzbuzz problem pretty quickly (and it would probably compile first time, too). But if your "IT" job consists, as many do these days, of using rich visual tools to script up interactions between enterprise components supplied by the tool vendor, or maybe filling in the templates used to generate the code that generates the code that gets scripted up by someone else, or any one of a number of other similar things, then you certainly work at building IT systems, but what you do is not programming. And really, that's OK, kind-of. It isn't anything to be ashamed of, in and of itself.

The economic and cultural conditions that have caused such dreary jobs to exist and be worth having perhaps are something to be ashamed of, but this is not the fault of the people filling the jobs.


Fitted up

Anyway, back in the late 1990s when I was pursuing a post-grad course in Software Engineering there was talk of "component foundries", and the idea that there would be people like the toolmakers and machinists of yore working in these foundries. And then there would be something like fitters, who would assemble working systems out of those components. Please accept my apologies if you are reading this in a country that has not turned its back on manufacturing (except in certain specialized areas) and you still know what I'm talking about. Being a fitter was a fine profession, back in the day, but no-one would hire a fitter as a toolmaker (although a fitter might aspire to be, and retrain as, a toolmaker). This change now seems to have happened in the IT industry, without being widely recognised.

Maybe we suffer from a sort of false democracy, a misguided anti-elitism in the industry. And maybe this is a capitulation to those who would commodify our work. Maybe we need to face up to these sorts of distinctions a bit more honestly and stop expecting that everyone who has anything at all to do with building systems will be skilled as a programmer, or should be expected to be able to write code from scratch. It just may not be what they do. Ever. If you want to hire someone who can write code from scratch, the pool of "IT professionals" is not the only place to look, nor is everyone in it a candidate.



By the way, I do want to hire such people, and our hiring process for all technical positions involves a session of actual pair programming, at a workstation, writing real code that has to work, and be shown to work. And the problems we set are a damn sight more complicated than fizz-buzz. If you like the sound of being in the group of people who are successful at such a test, and if you live within, or would be willing to move within, commuting distance of the inside of the M25, then why not send me a CV?

More Real Engineering

Via 37signals, this delicious quote, one of several on this page:
If a major project is truly innovative, you cannot possibly know its exact cost and its exact schedule at the beginning. And if in fact you do know the exact cost and the exact schedule, chances are that the technology is obsolete.
As the discussion on 37S suggests, the word "obsolete" is rather strong. But then, the chap quoted here was Director of the Lunar Module Programme at Grumman. Working on Apollo probably gave you a rather different idea of what was obsolete from that of the average person. Or even the average engineer.

But the essence of the quote is absolutely right. And this is a crucial difference between manufacturing and creative engineering. Which of these things you think software development most resembles will have a big impact on what you think a plan looks like, and what sort of things you think are reasonable estimates to use to build that plan. That's not to say that in the case of innovation all bets are off: the Apollo Programme did meet JFK's deadline for putting a man on the moon and returning him to Earth (just).

But the next time you are starting a development project, getting set for iteration zero, or the inception phase, or whatever you call it, think about which parts of the project are innovative, and which are business-as-usual. Hint: using RoR to render SQL conversations over HTTP this time round instead of J2EE is not innovation. That's just no longer attempting to do your job with one hand tied behind your back. Second hint: the innovative bits are the ones that the business sponsors are really excited about, and that the architect (don't pretend you don't have one) thinks are going to be "interesting", and that the project manager has to have a stiff drink before contemplating.

There are better ways to deal with the uncertainty that comes along with the huge promise of innovation, and, well... less good ones. You owe it to yourself (and your business sponsor) to figure out which aspects of the project you should be doing which way. Oh, and if the answer is mostly business-as-usual you owe it to the business to ask why the project is being done at all.


Chatting down the Tuesday Club last night, the notion came up that Java packages really don't carry their weight. They aren't modules; they do lead to the top of your source files being infested with clutter; they aren't units of deployment; they do cause endless hair-pulling about whether a class belongs in this package or would be better in that one. Life is too short.

What would be so bad about having all the classes that form your application in one package? It would be nice to have some tool support for filtering views onto that soup of classes, but why have those views (no reason to have only one) hard-coded in the source files?

Further discussion led to the conclusion that you might want to have one named package, called something like "application.export", in which would be placed classes that are the subjects of external dependencies. You might have one other named package for your application, to avoid the Date problem. But really, no-one, no-one, is doing the thing that the com.yoyodyne.abomination.ghastly.verbose.madness pattern is meant to enable: guaranteeing that there will be no name clashes when your runtime pulls down a bunch of classes from a random place on the network. Think of all the gateways that exist between any .jar file you might think of and your runtime environment. How many opportunities to fix any such clashes exist between finding that the name conflict exists and getting a binary deployed?
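The "Date problem" is the one genuine collision most Java programmers ever meet: java.util and java.sql both declare a Date, and the fix is a single, human-applied import decision, not anything the deep reverse-DNS package hierarchy does for you. A minimal illustration (everything here is the standard library, nothing invented):

```java
import java.util.Date; // pick one; the other must be fully qualified

public class DateClash {
    public static void main(String[] args) {
        Date now = new Date(); // java.util.Date, per the import above
        // The clash is resolved in one line, at the import, by a human --
        // not by the runtime pulling classes from a random place on the network.
        java.sql.Date sqlEpoch = new java.sql.Date(0L);
        System.out.println(now.getTime() >= sqlEpoch.getTime()); // prints true
    }
}
```

That's the whole ceremony; one other named package for your own types buys you the same escape hatch.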

Real Engineering

Glenn Vanderburg has put up a great article about the flaws in some of the simple-minded comparisons that sometimes get made between software development and "real engineering". I'm reminded of some of those shows put out by the Discovery Channel about gigantic civil (usually) engineering feats. I don't have a TV so I only really see any when I'm travelling and want to zone out for a bit, but I always look for these shows. What I love about them is that, in amongst all the impressive toys and herculean efforts, there is drama. This is really why they make good television, almost more than the spectacle. And they have drama because these big engineering projects always go wrong. They overrun their budgets, miss deadlines, assumptions turn out to be false and invalidate plans, tools don't work as advertised, techniques turn out to be misapplied, and when you get right down to it, it turns out to be the collective inventiveness of the teams doing the work that makes the project a success. Real engineering. The difference between us in the IT industry and those folks is not that we have spectacular failures and they don't, it's that they learn from theirs.

Simple pleasures vs the singularity

It's a shame that the title of this reddit post, through sloppy use of an implied universal quantification, allows for a cheap refutation. And a shame, too, that the chap in the original article asked his question of Kurzweil in such a way as to admit an answer focussing on the instrumental.

I have to tell you, a lot of talk about the Singularity gives me the creeps. Most folks who talk about it at all do so in a mode suggesting that it is 1) inevitable and 2) desirable. I don't find this encouraging. But then, I'm not really a technologist's technologist, even though I make my living in IT, so a lot of more-technology-is-better-until-that's-all-there-is talk leaves me cold. And I think back to when I lived on a particularly lovely part of the south coast of England, and to weekends spent pottering about the New Forest (which, for the neophiles amongst you, was "new" in 1079 CE), walking hand-in-hand with my girlfriend, helping her two kids explore the interesting things to be found in woodland ponds, throwing balls for the dog, and so forth. What is the Singularity going to offer that's better than that? The Far Edge Party and its ilk? I don't think so.

It's Alive! [flash of lightning, crash of thunder]

Steve Yegge, in a rather lengthy blog posting (as this one threatens to become, and with the good stuff at the end, too) mentions the characteristics he believes characterise "great" software. I pretty much agree with them, and here is that list, lightly glossed by me:
  1. Systems should never reboot [and so, in order to be useful]
  2. systems must be able to grow without rebooting [thus]
  3. world-class software systems always have an extension language and a plug-in system
  4. [as a bonus] Great systems also have advice, [...]hooks by which you can programmatically modify the behavior of some action or function call in the system
  5. [in order for 4 to be possible and 3 to be easy] great software systems are introspective
  6. [and] every great system has a command shell [so that you can get at the above easily and interactively, see below for the importance of this]

Wot no database?

Steve gives examples of systems that exhibit various of these properties to varying degrees, but one whole class of system is missing from his list: RDBMSs. It's possible to get very much carried away with what relational datastores can do, and on the other hand, to relegate them to being not much more than file managers. Either way, it's strange that Steve doesn't mention even one RDBMS in his lists of "favourite systems", since the industrial-strength RDBMSs exhibit all of the properties that he ascribes to good systems (possibly for small values of "plug-in system"). Certainly they do much more so than, say, the JVM, which is on his list.

But then, when was the last time you saw someone with a terminal open doing real work by composing ad-hoc SQL queries (which is where RDBMSs really shine)? Why have we, in the IT industry, robbed our users of this?

Steve goes on to talk about rebooting as death, and muses about the conscious/unconscious boundary and what it might mean to reboot a system on the conscious side of it. Highly speculative, but then I did once have a conversation down the pub with someone who reported that someone else (sorry about this) was researching what it would take for self-aware AI to be benevolent towards us. An intriguing notion: those who subscribe to the idea of a creator seem always to assume that the creator is both infinitely superior to the creature and well disposed towards it (except for this one), or at worst indifferent. A self-aware, self-modifying AI might be expected to rapidly become superior to us. And if we're in the habit of killing its siblings as standard operating procedure...

But I digress. So does Steve, and I find that in amongst all the amateur philosophising he kind of misses his own point. Towards the end he starts banging on about static type systems in general and about Hindley-Milner in particular. This provoked the Haskell folks over on Reddit to show that Haskell can support the sort of systems he describes. Which goes to show how relatively uninteresting the kind of "life" that Yegge talks about is. Frameworks that support hot-swappable executables can be built in Haskell, of course they can. But doing so seems like a stunt.

Back to the Future

Thinking about persistence reminds me of an event: in a previous job I had for a time the title of "Head of Research", with a very open brief to investigate new ways of doing things. The business liked applications to be presented through a web browser, so I built a prototype for one system that I had in mind using Seaside. One of the production developers looked at this prototype and asked me how it was persisted. It's persisted, I said, by never turning this box (kicks box hosting the prototype) off. That's not an acceptable solution for a production system, although something that simulates the same effect can be. And Ralph Johnson has this to say about that:

[clang!] just a moment... Oh, the irony: that blog is down (18 Jan 2007); the posting is here, and here's the Google cache. Anyway, what Ralph says in his own squeak list post, the one that the blog post points to, is:

I did this in Smalltalk long before Prevayler was invented. In fact, Smalltalk-80 has always used this pattern. Smalltalk programs are stored this way. Smalltalk programs are classes and methods, not the ASCII stored on disk. The ASCII stored on disk is several things, including a printable representation with things like comments that programmer need but the computer doesn't. But the changes file, in particular, is a log and when your image crashes, you often will replay the log to get back to the version of the image at the time your system crashed. The real data is in the image, the log is just to make sure your changes are persistent.

It's a shame that it has to be when your image crashes, not if, but never mind. This is a pointer to the big win that Yegge misses, in my opinion (and his pro-Haskell commentators miss, too). The interesting thing is not hot-swapping components without a restart. It's about being enabled to interact with computational objects ("object" in the broadest possible sense) in the same way you interact with physical objects: they should be ready-to-hand, not present-at-hand.
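The pattern Ralph describes (the real data lives in the image; the log exists only to make changes durable) can be sketched in very few lines. To be clear, this is just the shape of the idea with invented names, not Prevayler's or Smalltalk's actual machinery:

```java
import java.util.ArrayList;
import java.util.List;

// A toy "image + changes file": state lives in memory, every change is
// appended to a log, and replaying the log rebuilds the state after a crash.
public class ToyPrevalence {
    private int balance = 0;                             // the "image"
    private final List<Integer> log = new ArrayList<>(); // the "changes file"

    void deposit(int amount) {
        log.add(amount);   // journal first...
        balance += amount; // ...then mutate the live state
    }

    int balance() { return balance; }

    List<Integer> journal() { return log; }

    // After a "crash", replay the journal into a fresh image.
    static ToyPrevalence recoverFrom(List<Integer> journal) {
        ToyPrevalence fresh = new ToyPrevalence();
        for (int amount : journal) fresh.deposit(amount);
        return fresh;
    }

    public static void main(String[] args) {
        ToyPrevalence live = new ToyPrevalence();
        live.deposit(10);
        live.deposit(32);
        // Simulate a crash: all we have left is the log.
        ToyPrevalence recovered = ToyPrevalence.recoverFrom(live.journal());
        System.out.println(recovered.balance()); // prints 42
    }
}
```

The log is pure insurance; all the questions get answered by the live objects, which is exactly the inversion that makes database-first thinking feel so strange from inside an image.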

A Smalltalk image is a long-lived thing (it's been claimed that some of the standard Smalltalk-80 image survives into contemporary Squeak images), but it has, along with a long lifetime, also a sense of liveness. The objects are just there, beavering away, or perhaps hanging out waiting for you to ask something of them. And they respond immediately. And Smalltalks have this sort of liveness built in by default, and assumed all the way down to a very low level. Working in such systems is a very different thing from working on or with others, and in my experience a much more productive and (the two go together) enjoyable one.

Smalltalk is not unique in this. Common Lisp implementations tend to be this way (and Scheme ones not, which is a shame). Self is this way. Subtext (not the one you're likely thinking of) is very much this way. And, to a limited extent, an Excel spreadsheet is this way. VisualAge for Java tried really hard to be this way, but this seemed to confuse the majority of that new breed of folks, the "Java Programmer".

Lots more systems should be this way too. Funnily enough, I believe that the best candidate around at the moment for getting this sort of immersed computation experience into the mainstream is JavaScript hosted in web browsers, if only more people would take it seriously.

Complexity and Test-first 3

The story came from here.

At XPDay last year I ran a session exploring some of the ideas about complexity and TDD that I've discussed here before. Attendees at the session got a very early copy of a tool, "measure", which calculates some of the statistics I'm interested in. From that came some good feedback, which I've incorporated into a new version of the tool, available here. Note to XPDay attendees: this is a much more informative bit of software than the one you have, I suggest you upgrade.

The session will be repeated at Spa this year.

New Tool Features

Measure now shows the parameters for both a Zipf and a Pareto distribution modelling how the complexity is distributed across the codebase. It also shows the R-squared value for the regression through the data points. This was a particular request from the XPDay session and, with due care and attention, can be used to get some idea of how good a fit the distributions are.
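For the curious, the fit being reported is ordinary least squares over the log-log points. Here's a sketch of that calculation; this is my reconstruction of the kind of thing measure computes, not its actual source, and note the sign convention (a perfectly Zipfian data set gives a slope of -1 here, where the tables below report magnitudes):

```java
import java.util.Arrays;

// Least-squares fit of log(complexity) against log(rank): the slope and
// intercept of this line are the Zipf parameters, and R^2 says how well
// a Zipf distribution models the data.
public class ZipfFit {
    // Returns {slope, intercept, rSquared}, ranking complexities descending.
    public static double[] fit(double[] complexities) {
        double[] c = complexities.clone();
        Arrays.sort(c); // ascending; we'll walk it backwards
        int n = c.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0, syy = 0;
        for (int i = 0; i < n; i++) {
            double x = Math.log(i + 1);        // log(rank); rank 1 = biggest
            double y = Math.log(c[n - 1 - i]); // log(complexity), descending
            sx += x; sy += y; sxx += x * x; sxy += x * y; syy += y * y;
        }
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double intercept = (sy - slope * sx) / n;
        double r = (n * sxy - sx * sy)
                 / Math.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));
        return new double[] { slope, intercept, r * r };
    }

    public static void main(String[] args) {
        // Complexity proportional to 1/rank is exactly Zipfian, so the
        // log-log slope should come out at -1 with R^2 of 1.
        double[] zipfian = { 100, 50, 100.0 / 3, 25, 20 };
        double[] f = fit(zipfian);
        System.out.printf("slope=%.2f intercept=%.2f R^2=%.2f%n", f[0], f[1], f[2]);
    }
}
```

A low R-squared from a fit like this is the signal, discussed below, that a straight line through the log-log points is the wrong model in the first place.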

Measure also now accepts a -h flag and will display some information about how to interpret the results.

New Results

Using this updated tool, I examined some more codebases (and reëxamined some from before). This table shows the Zipf distribution parameters. It's getting to be quite difficult to find Java code these days that doesn't come with tests, so some of the codebases with an N shown there are ones from SourceForge that haven't had a commit made for several years.
Codebase              Slope  Intercept  R^2   Automated unit tests?
Jasml 0.1             0.95   3.52       0.73  N
Smallsql 0.16         1.54   5.87       0.80  Y
m-e-c schedule α3-10  1.69   4.94       0.92  N
Xcool 0.1             1.93   4.60       0.84  N
MarsProject 2.79      2.33   7.9        0.96  Y
Log4j 1.2.14          2.43   7.34       0.96  Y

So, the lower slope and higher intercept of JRuby was a surprise. But note also that the R-squared is quite low. This causes me to refine my thinking about these (candidate) metrics. A low R-squared means that the linear regression through the (log-log) points describing the complexity distribution is not a good fit, which suggests that the actual distribution is not modelled well by a Zipf type of relation. Maybe there's something about language implementations that produces this? It's worth checking against, say, Jython.

Meanwhile, those codebases with a high R-squared (say, 0.9 or above) seem to confirm my hypothesis that (now I must say "in the case of codebases with very strongly Zipf distributions of complexity") having tests produces a steeper slope to the (log-log) linear regression through the distribution. And it looks as if a slope of 2.0 is the breakpoint. Still lots more to do though.

Further Investigation

I'd be particularly interested to hear from anyone who finds a codebase with an R-squared of 0.9 or more and tests, but a slope less than 2.0.

The discovery of codebases whose complexity distribution is not well modelled by Zipf is not a surprise. The fact that such codebases do not match the tests => steeper slope hypothesis is very interesting, and suggests strongly that the next feature to add to measure is a chart of the actual complexity distribution to eyeball.

Note to publishers of code

This sort of research would be impossible without your kind generosity in publishing your code, and I thank you for that. If your code appears here, and has been given a low intercept, this should not be interpreted as indicating that your code is of low quality, merely that it does not exhibit particularly strongly the property that I'm investigating.

The Story continues here.