In what feels to me like a voyage through a time-warp back to the beginnings of my programming career in the mid-90s, Jason Gorman has revived the old static vs dynamic typing debate.
Oh, how it all comes flooding back: "strong[sic] typing is for
weak minds", "static type systems catch the kind of bug that managers understand"
etc. etc. etc.
Jason's concerns seem to have been raised by something to do with this sort of thing (see the part on the var keyword), although he seems to muddle up a language having a dynamic type system with the language itself being dynamic. These aspects are closely related, but they are not quite the same (as that post explains reasonably well). It's picking nits of this variety that keeps us all in work.
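To put that nit in Haskell terms (since that's where I've been dabbling, of which more below): leaving out a type annotation gets you type inference, not dynamic typing; the compiler still fixes a type before the program ever runs. A minimal sketch:

    -- No type is written down for x, but it still gets one at compile
    -- time (Integer here, by Haskell's defaulting rules), so misuse is
    -- rejected before the program runs.
    main :: IO ()
    main = do
      let x = 42
      print (x + 1)
      -- print (length x)  -- rejected at compile time: x is a number,
      --                      not a list, annotation or no annotation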
Anyway, I very much agree with Jason that this resurgence of interest in dynamicish, scripty languages is driven largely by fashion, and that the claims made about them should be closely examined. I'm not so sure about the rest of his argument, not least because I'm not sure that what he complains about:
Proponents of such languages cite the relative flexibility of dynamic typing compared to statically-typed languages like Java and C++. Type safety, they argue, can be achieved through unit testing.
is actually being said by anyone. Type safety through unit testing? Really? Maybe someone is saying it and I just haven't seen it; I'd be interested to see a link if anyone has one.
Personally, I do tend towards dynamic languages and away from manifest static type systems. My objection to manifest static typing is that (in the words of, IIRC, Kent Beck) it makes me say things that I don't know are true. On the other hand, I've been dabbling a bit with Haskell, which has a very strong, very expressive static type system and offers the promise (through type inference) of not requiring all those pesky declarations. That has been both educational and fun. Unfortunately,
as Nat pointed out in another context, that lovely promise might not be delivered upon:
[in Haskell] If you don't write explicit type constraints you can end up with a type inference error somewhere in your code. Where? The only way to find out is to incrementally add explicit type constraints (using binary chop, for example) to narrow down where the error is. It's not much different, and no easier, than using printf for debugging C code.
If this is the
best case of static typing, then we have a problem.
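For concreteness, the debugging-by-annotation technique Nat describes might look something like this; the pipeline and its names are invented for illustration, with the explicit signatures playing the role of the constraints you add to chop the search space down:

    -- A toy pipeline. If the inferred types of the pieces disagree, GHC
    -- may report the mismatch a long way from its cause; pinning each
    -- piece with an explicit signature narrows down where the
    -- disagreement really lives.

    parse :: String -> [Int]      -- pin one half; if this signature is
    parse = map read . words      -- rejected, the problem is in here

    summarise :: [Int] -> Int     -- otherwise pin the other half
    summarise = sum

    process :: String -> Int      -- with both halves constrained, any
    process = summarise . parse   -- remaining error is localised here

    main :: IO ()
    main = print (process "1 2 3")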
Meanwhile, let's consider the distinction between systems programming and application programming. Try googling around the various attempts to nail down that distinction; I don't find any of them terribly satisfactory. For me, the crucial distinction is that the systems programmer must allow for any possible use of their code, whereas an application programmer need not.
This means that in systems land a function declared like f :: int -> int -> int must, unless very carefully specified otherwise, be known to do the right thing at every element of the whole of the Cartesian product of the int type with itself. But in application land we might, for example, know that those ints are really the numbers of the days of the week, so we only really need the function to do the right thing over {0,1,2,3,4,5,6}². Demonstrating those two different kinds of correctness requires different techniques, I think.
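To make that concrete, here is a sketch of the two techniques side by side; the function f ("d days after weekday w") is invented for illustration, and the whole-domain check uses QuickCheck:

    import Test.QuickCheck (quickCheck)

    -- A hypothetical f in the int -> int -> int mould: "d days after
    -- weekday w", with weekdays encoded as 0..6.
    f :: Int -> Int -> Int
    f w d = (w + d) `mod` 7

    -- Systems land: f must behave at every element of Int x Int, so a
    -- property quantified over the whole product type is the natural
    -- tool. (Haskell's mod keeps the result in 0..6 even for negative
    -- arguments, so this holds everywhere.)
    prop_alwaysAWeekday :: Int -> Int -> Bool
    prop_alwaysAWeekday w d = f w d `elem` [0..6]

    -- Application land: both arguments are known to range over
    -- {0,1,2,3,4,5,6}, so exhaustively checking those 49 cases is both
    -- feasible and sufficient.
    unitTests :: Bool
    unitTests = and [ f w 0 == w && f w d `elem` [0..6]
                    | w <- [0..6], d <- [0..6] ]

    main :: IO ()
    main = do
      quickCheck prop_alwaysAWeekday  -- random samples of the full domain
      print unitTests                 -- exhaustive over the application domain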
Of course, using int when you mean {0,1,2,3,4,5,6} is a smell. And here is the deodorant.
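I won't reproduce the deodorant here, but the usual prescription is to say in the type what you actually mean; in Haskell terms, something like this (the names are mine):

    -- A weekday can now only ever be one of seven values; the compiler
    -- enforces the domain that the int encoding left implicit.
    data Day = Sun | Mon | Tue | Wed | Thu | Fri | Sat
      deriving (Show, Eq, Enum, Bounded)

    addDays :: Day -> Int -> Day
    addDays w d = toEnum ((fromEnum w + d) `mod` 7)

    main :: IO ()
    main = print (addDays Sat 3)   -- Tue

With the domain captured in the type, much of the exhaustive checking above becomes redundant, which hints at the connection I'm groping towards below.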
I want to join these two things up. I want to make a connection between the kind of correctness that systems code needs to have and the way that static typing might help us with that, versus the kind of correctness that application code needs to have and the way that unit testing might help us with that. But I'm not there yet. Watch this space.