It's Alive! [flash of lightning, crash of thunder]

Steve Yegge, in a rather lengthy blog posting (as this one threatens to become, and with the good stuff at the end, too) mentions the characteristics he believes characterise "great" software. I pretty much agree with them, and here is that list, lightly glossed by me:
  1. Systems should never reboot [and so in order to be useful]
  2. systems must be able to grow without rebooting [thus]
  3. world-class software systems always have an extension language and a plug-in system
  4. [as a bonus] Great systems also have advice, [...] hooks by which you can programmatically modify the behavior of some action or function call in the system [see the sketch after this list]
  5. [in order for 4 to be possible and 3 to be easy] great software systems are introspective
  6. [and] every great system has a command shell [so that you can get at the above easily and interactively; see below for the importance of this]
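
To make item 4 concrete, here's a minimal sketch of advice in JavaScript. The names (adviseAround, unadvise) are mine and purely illustrative, not from any particular library: an existing method on a live object is wrapped so that extra behaviour runs around every call, and the wrapping can be undone, all without restarting anything.

    // Wrap a method on a live object so advice runs around every call.
    // All names here are illustrative, not from any particular library.
    function adviseAround(target, methodName, advice) {
        var original = target[methodName];
        target[methodName] = function () {
            // The advice gets the original function and the call's
            // arguments, and decides if and how the original call proceeds.
            return advice.call(this, original, arguments);
        };
        return function unadvise() { target[methodName] = original; };
    }

    // Example: log every call to logger.write without touching its source.
    var logger = { write: function (msg) { console.log(msg); } };
    var unadvise = adviseAround(logger, 'write', function (original, args) {
        console.log('about to write:', args[0]);
        return original.apply(this, args);
    });
    logger.write('hello');   // the advice fires, then the original runs
    unadvise();              // behaviour reverts; the system never stopped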

Wot no database?

Steve gives examples of systems that exhibit these properties to varying degrees, but one whole class of system is missing from his list: RDBMSs. It's possible to get very much carried away with what relational datastores can do, or, on the other hand, to relegate them to being little more than file managers. Either way, it's strange that Steve doesn't mention even one RDBMS in his lists of "favourite systems", since the industrial-strength RDBMSs exhibit all of the properties that he ascribes to good systems (possibly for small values of "plug-in system"). Certainly they exhibit them far more fully than, say, the JVM, which is on his list.

But then, when was the last time you saw someone with a terminal open doing real work by composing ad-hoc SQL queries (which is where RDBMSs really shine)? Why have we, in the IT industry, robbed our users of this?

Steve goes on to talk about rebooting as death, and muses about the conscious/unconscious boundary and what it might mean to reboot a system on the conscious side of it. Highly speculative, but then I did once have a conversation down the pub with someone who reported that someone else (sorry about this) was researching what it would take for a self-aware AI to be benevolent towards us. An intriguing notion: those who subscribe to the idea of a creator seem always to assume that the creator is both infinitely superior to the creature and well disposed towards it (except for this one), or at worst indifferent. A self-aware, self-modifying AI might be expected to rapidly become superior to us. And if we're in the habit of killing its siblings as standard operating procedure...

But I digress. So does Steve, and I find that in amongst all the amateur philosophising he kind of misses his own point. Towards the end he starts banging on about static type systems in general and about Hindley-Milner in particular. This provoked the Haskell folks over on Reddit to show that Haskell can support the sort of systems he describes. Which goes to show how relatively uninteresting the kind of "life" that Yegge talks about is. Frameworks that support hot-swappable executables can be built in Haskell, of course they can. But doing so seems like a stunt.

Back to the Future

Thinking about persistence reminds me of an event: in a previous job I had for a time the title of "Head of Research", with a very open brief to investigate new ways of doing things. The business liked applications to be presented through a web browser, so I built a prototype for one system that I had in mind using Seaside. One of the production developers looked at this prototype and asked me how it was persisted. It's persisted, I said, by never turning this box (kicks box hosting the prototype) off. That's not an acceptable solution for a production system, although something that simulates the same effect can be. And Ralph Johnson has this to say about that:

[clang!] just a moment... Oh, the irony: that blog is down (18 Jan 2007), the posting is here, and here's the Google cache. Anyway, what Ralph says in his own Squeak list post, the one the blog post points to, is:

I did this in Smalltalk long before Prevayler was invented. In fact, Smalltalk-80 has always used this pattern. Smalltalk programs are stored this way. Smalltalk programs are classes and methods, not the ASCII stored on disk. The ASCII stored on disk is several things, including a printable representation with things like comments that programmers need but the computer doesn't. But the changes file, in particular, is a log and when your image crashes, you often will replay the log to get back to the version of the image at the time your system crashed. The real data is in the image, the log is just to make sure your changes are persistent.
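
Here's that pattern, the one Prevayler packages up, in miniature JavaScript (all the names are mine, not Prevayler's): the live in-memory objects are the system of record, every change goes through a command that is appended to a log, and recovery after a crash is just replaying the log against a fresh image.

    // The "image plus change log" pattern in miniature.
    // Illustrative names only; in real life the log is an append-only file.
    var log = [];
    var image = { accounts: {} };   // the live state: the "image"

    var commands = {
        deposit: function (state, args) {
            state.accounts[args.id] = (state.accounts[args.id] || 0) + args.amount;
        }
    };

    function execute(name, args) {
        log.push({ name: name, args: args });   // record the change first...
        commands[name](image, args);            // ...then apply it to the image
    }

    // After a crash: rebuild the image by replaying the saved log.
    function recover(savedLog) {
        image = { accounts: {} };
        savedLog.forEach(function (entry) {
            commands[entry.name](image, entry.args);
        });
    }

    execute('deposit', { id: 'ralph', amount: 100 });

The point is the inversion: the log is not the data, it's insurance. The image is the data.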

It's a shame that it has to be when your image crashes, not if, but never mind. This is a pointer to the big win that Yegge misses, in my opinion (and his pro-Haskell commentators, too). The interesting thing is not hot-swapping components without a restart. It's being able to interact with computational objects ("object" in the broadest possible sense) in the same way you interact with physical objects: they should be ready-to-hand, not present-at-hand.

A Smalltalk image is a long-lived thing (it's been claimed that some of the standard Smalltalk-80 image survives into contemporary Squeak images) but, along with that long lifetime, it also has a sense of liveness. The objects are just there, beavering away, or perhaps hanging out waiting for you to ask something of them. And they respond immediately. Smalltalks have this sort of liveness built in by default, and assumed all the way down to a very low level. Working in such systems is a very different thing from working on or with other systems, and in my experience a much more productive and (the two go together) enjoyable one.

Smalltalk is not unique in this. Common Lisp implementations tend to be this way (and Scheme ones tend not to be, which is a shame). Self is this way. Subtext (not the one you're likely thinking of) is very much this way. And, to a limited extent, an Excel spreadsheet is this way. VisualAge for Java tried really hard to be this way, but this seemed to confuse the majority of that new breed of folk, the "Java Programmer".

Lots more systems should be this way too. Funnily enough, I believe that the best candidate around at the moment for getting this sort of immersed computation experience into the mainstream is JavaScript hosted in web browsers, if only more people would take it seriously.
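
In fact it's already half there: open the JavaScript console on any page and you're inside a running image, talking to live objects. A throwaway example (assuming the page happens to contain a button):

    // Typed line by line into a browser's JavaScript console.
    // Assumes the page has at least one button element.
    var button = document.querySelector('button');   // grab a live object
    console.dir(button);                             // introspect it
    button.onclick = function () {                   // re-wire its behaviour:
        this.textContent = 'clicked: ' + new Date(); // no reload, no reboot
    };

No compile, no deploy, no restart: you ask a live object to change, and it does.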