In knowledge work, “just work” is learning. [his emphasis]

If that's true then I think we have to say that a lot of contemporary development activity isn't knowledge work. Wiring up GUI controls? Transducing between HTTP requests and SQL queries? Mashing up web services? You have to learn how to do these things at all, it's true. But what do you learn while doing it once you know how? What learning takes place that is the work?
A pair team has to produce twice as much output at a given level of quality as two solo programmers in order to be worthwhile.

What would we have to believe in order to think that this claim is true? I think that we would have to believe that the paired programmers are engaged in quite a strong sense of "just work". What is it that a pair of programmers necessarily do half as much of because they are pairing? I can't think of anything but typing. Are keystrokes really what it is that programmers do that adds value? At a slightly higher level of sophistication we might say (as Corey in fact does) that the measure of value added is function points. That's a much better measure than typing (which implies code)

Measuring programming progress by lines of code is like measuring aircraft building progress by weight. —Bill Gates

but still gives me slight pause. Function points are still a measure from the technical solution space, not the business problem space. One of the things that makes me a bit nervous about the rush to Lean is the de-emphasis on conversations with the user/customer (which Leaners have a habit of dismissing as wasteful "planning") in favour of getting tons of code written as fast as possible.
Certain (web, in particular) frameworks lead one in the direction of NOJOs. It begins with the DTO (just one kind of NOJO) and spreads from there. Is that good or bad? Can't say completely in the abstract, but if your current framework does lead you that way, have you given thought to what design choices and architectural stances you are being led to? Do you like them?
- The person in front of you will complement your pre-existing team
- The person in front of you can actually write a program
- The person in front of you is smart
Registration for XPDay is now open. XPDay sells out each year, so please book now. We operate a waiting list once the conference is full but only a handful of places subsequently become available.
The full cost of the conference is £350 but the first 50 people to book get a special “Early Bird” price of £275.
The few programmed sessions are set, and we encourage registered attendees to think about the topics they'd like to see discussed in the Open Space sessions. Attendees have the option to prepare a 5 minute "lightning talk" to promote their preferred topics, and these will go into the Open Space topic selection.
I look forward to seeing you there.
Would you please sit down here at this workstation and write some code with me?

If you are reading this then you have almost certainly been through many an interview for a job that will require some programming to be done. Think back and ask yourself how many times you were offered such a job without ever having been in the same room as a computer and an interviewer.
In my own experience (I've had 9 jobs involving programming, and many more interviews) it's been quite rare. One has to wonder why that is. I suspect that having a candidate do some programming is considered very expensive—and yet hiring the wrong candidate is unbelievably expensive. Hence the questions, which seem to be aimed at finding out if someone will be able to program without watching them do any.
Given a singly linked list, determine whether it contains a loop or not

I can see what's being aimed at here, and yet it feels as if, in the stress of an interview setting, all the answer given will tell you is whether or not the candidate has seen the textbook answer before. How does knowing that help?
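For reference, the textbook answer in question is usually Floyd's "tortoise and hare" algorithm. A minimal sketch in Java (this Node class is hypothetical, just enough to make the idea concrete):

// A minimal singly linked list node, for illustration only.
class Node {
    Node next;
}

class LoopDetector {
    // Floyd's "tortoise and hare": advance one pointer by one step
    // and another by two; if there's a loop they must eventually meet,
    // otherwise the fast pointer falls off the end of the list.
    static boolean hasLoop(Node head) {
        Node slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
            if (slow == fast) return true;
        }
        return false;
    }
}

Knowing that a candidate can recite this tells you they've seen it before; it doesn't tell you much about how they'd fare on your codebase.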
PS: yes we are hiring. If you'd like a go at this, drop me a line.
Collaborative optimism to solve the right problems (and only the right problems) in an incremental way

I quite like that "collaborative optimism" bit.
"Opportunities for Improvement"

The pattern form used has nine sections and I'm not sure it helps to have each section shown in the table of contents. I feel the lack of a ready-to-hand one or two page list of all the patterns by name. Twelve pages of TOC for 320 pages of content seems a bit excessive. The same syndrome shows up in the index. If all the parts of a pattern are within a four page range do they really all need their own individual index entries? This is really a shame as the book otherwise benefits from the several other complementary ways to navigate it.
the type says it all

The type says a lot, that's for sure. It says that map/reduce results in a value of type b when given a list of values of type a, a function that gives one b given two b's and a function that gives one b given a list of a's. That is a lot of information and it's impressive that it (at least) can be stated on one line. Is it "all", though?
p_map_reduce :: ([a] -> b) -> (b -> b -> b) -> [a] -> b
bump all your cores to 100% usage, and linearly increase overall performance

which it may well do on a large enough data set, if I have fewer than seventeen cores. Currently, I do. There are nine cores in my flat right now. If my other laptop were here, it'd be eleven. But if I'm planning to use map/reduce in the first place, maybe I'm going to recruit all the machines in the office, which would currently be worth over twenty cores. This map/reduce isn't going to pin all of them.
Why not? Because the implementation spawns exactly 16 workers. Should p_map_reduce have a dependent type with the number of worker processes in it? I don't know enough about it to tell. And why would we care? Well, that number 16 is a very significant part of the implementation of that function.
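To make the point concrete, here's a sketch in Java (hypothetical names; the original is Haskell) of a parallel map/reduce whose worker count is baked into the implementation and completely invisible in its signature:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;
import java.util.function.BinaryOperator;
import java.util.function.Function;

final class PMapReduce {
    // The signature says nothing at all about parallelism...
    static <A, B> B pMapReduce(Function<List<A>, B> mapper,
                               BinaryOperator<B> combiner,
                               List<A> items) throws Exception {
        if (items.isEmpty()) throw new IllegalArgumentException("need at least one item");
        // ...but the implementation quietly fixes the worker count at 16.
        final int WORKERS = 16;
        ExecutorService pool = Executors.newFixedThreadPool(WORKERS);
        try {
            int chunk = Math.max(1, items.size() / WORKERS);
            List<Future<B>> partials = new ArrayList<>();
            for (int i = 0; i < items.size(); i += chunk) {
                List<A> slice = items.subList(i, Math.min(items.size(), i + chunk));
                Callable<B> task = () -> mapper.apply(slice);
                partials.add(pool.submit(task));
            }
            B result = partials.get(0).get();
            for (int i = 1; i < partials.size(); i++) {
                result = combiner.apply(result, partials.get(i).get());
            }
            return result;
        } finally {
            pool.shutdown();
        }
    }
}

Nothing in the type stops you pointing this at a cluster's worth of data and still getting at most 16-way parallelism.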
This blog was nominated (most kind, thanks), as were those of my colleagues Ben and Daryn, but Daryn's has made it onto the shortlist. This is fitting recognition for the fine work he has put into his blog, particularly his series of Ruby/Rails tutorials which are gathering some recognition in that world. Feel free to go and vote for him!
John Nolan, the CTO of a startup named Connextra [...] gave his developers a challenge: write OO code with no getters. Whenever possible, tell another object to do something rather than ask. In the process of doing this, they noticed that their code became supple and easy to change.

That's right: no getters. Well, Steve Freeman was amongst those developers and the rest is history. Tim Mackinnon tells another part of the story. I think that there's actually a little bit missing from Michael's description. I'll get to it at the end.
A World Without Getters

Suppose that we want to print a value that some object can provide. Rather than writing something like
statement.append(account.getTransactions())

instead we would write something more like

account.appendTransactionsTo(statement)

We can test this easily by passing in a mocked statement that expects to have a call like append(transaction) made. Code written this way does turn out to be more flexible, easier to maintain and also, I submit, easier to read and understand, partly because this style lends itself well to the use of Intention Revealing Names.
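A minimal sketch of what that style looks like in Java (these class and method bodies are illustrative, not taken from the post):

import java.util.ArrayList;
import java.util.List;

interface Statement {
    void append(Transaction transaction);
}

class Transaction { /* details elided */ }

class Account {
    private final List<Transaction> transactions = new ArrayList<>();

    // Tell, don't ask: the account pushes its transactions to the
    // statement rather than exposing them through a getter.
    void appendTransactionsTo(Statement statement) {
        for (Transaction t : transactions) {
            statement.append(t);
        }
    }
}

In a test, Statement is the natural seam: a mock Statement simply asserts that append was called with the expected transactions, and no getter is ever needed.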
The World We Actually Live In

Let's suppose that we work in a mainstream IT shop, doing in-house development. Chances are that someone will have decided (without thinking too hard about it) that the world of facts that our system works with will live in a relational database. It also means that someone (else) will have decided that there will be an object-relational mapping layer, based on the inference that since we are working in Java (C#) which is deemed by Sun (Microsoft) to be an object-oriented language then we are doing object-oriented programming. As we shall see, this inference is a little shaky.
Irresponsible and Out of Control

What do you suppose the update method on the GenerateStatementController (along with its Account entities) might look like?
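I can only guess, but a hedged sketch of the getter-driven shape such code usually takes (all names invented, for contrast with the tell-don't-ask version above) might be:

// The getter-driven style, for contrast; names are illustrative only.
class GenerateStatementController {
    void update(AnaemicAccount account, Statement statement) {
        // The controller asks the "entity" for its data and does the
        // work itself; the Account is reduced to a bag of getters.
        for (Transaction t : account.getTransactions()) {
            statement.append(t);
        }
    }
}

class AnaemicAccount {
    private final java.util.List<Transaction> transactions = new java.util.ArrayList<>();

    public java.util.List<Transaction> getTransactions() {
        return transactions; // the representation leaks straight out
    }
}

All the behaviour lives in the controller, and the "objects" are just structs with ceremony.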
Old Skool

Back in the day the masters of structured programming [pdf] worried a lot about various coupling modes that can occur between two components in a system. One of these is "Stamp Coupling". We are invited to think of the "stamp" or template from which instances of a struct are created. Stamp coupling is considered (in the structured design world) one of the least bad kinds of coupling. Some coupling is inevitable, or else your system won't work, so one would like to choose the least bad ones, and (as of 1997) stamp coupling was a recommended choice.
Better still is "data" coupling, in which what gets passed around are plain values: an int, a char, that sort of thing. The point being that these are safe because it's profoundly unlikely that some other application programmer is going to kybosh your programming effort by changing the layout of char.
Putting Your Head in a Bag Doesn't Make You Hidden

David Parnas helped us all out a lot when in 1972 he made some comments [pdf] on the criteria to be used in decomposing systems into modules
Every module [...] is characterized by its knowledge of a design decision which it hides from all others. Its interface or definition was chosen to reveal as little as possible about its inner workings.

Unfortunately, this design principle of information hiding has become fatally confused with the implementation technique of encapsulation.
Declaring a private int count and then encapsulating it behind a getter public int getCount() hides nothing. When (not if) count gets renamed, changed to a big integer class, or whatever, all the client classes need to know about it.
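A sketch of the ripple effect (this Counter class is made up for illustration): suppose count outgrows int and becomes a BigInteger.

import java.math.BigInteger;

class Counter {
    // Before: private int count;
    private BigInteger count = BigInteger.ZERO;

    // The getter faithfully "encapsulates" the field, but its return
    // type tracks the field's type, so every caller must change too.
    public BigInteger getCount() {
        return count;
    }

    // Information hiding would instead expose behaviour and keep the
    // representation genuinely private, for example:
    public void increment() {
        count = count.add(BigInteger.ONE);
    }

    public boolean hasReached(long threshold) {
        return count.compareTo(BigInteger.valueOf(threshold)) >= 0;
    }
}

Clients of increment and hasReached never noticed the change of representation; clients of getCount all broke.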
What was the point of all that, again?
[...] the idea of immediate compilation and "unit tests" appeals to me only rarely, when I’m feeling my way in a totally unknown environment and need feedback about what works and what doesn’t. Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about. Nothing needs to be "mocked up."

I believe him. I also draw a few inferences.
Postscript

By the way, anyone* who reads that interview and thinks to themselves "ha! See! Knuth doesn't unit test, so I don't need to either" needs to consider a couple of things:
Wrap all primitives and strings. [...] So zip codes are an object not an integer, for example. This makes for far clearer and more testable code.

It certainly does, which was the topic of the session which Ivan and I ran at Spa this year.
Don’t use setters, getters, or properties. [...] “tell, don’t ask.”

Indeed. What we now know as jMock was invented to support this style of programming. It works a treat.
Don’t use the ‘else’ keyword. Test for a condition with an if-statement and exit the routine if it’s not met.

See, depends on what's meant. It could be that a trick is being missed. On the one hand, I'm a huge fan of code that returns early (if(roundHole.depthOf(squarePeg).sufficient()) return;), and also of very aggressive guard clauses (if(i.cantLetYouDoThat(dave)) throw new DeadAstronaut();). On the other, I believe that the constraint that does most to force you to "get" OO is this: don't use any explicit conditionals. Never mind else, use no if at all. Can you see how that would work? I learned this so long ago that I can't remember where from (by some route from "Big" Dave Thomas, perhaps?)
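For instance, where a conditional would dispatch on some notion of state, polymorphism can do the dispatching instead. A minimal sketch (an invented example, not from the post):

// Instead of: if (account.isOverdrawn()) { chargeFee(); } else { ... }
// let each state of the account be an object that knows what to do.
interface BalanceState {
    void applyMonthlyCharges(Account account);
}

class InCredit implements BalanceState {
    public void applyMonthlyCharges(Account account) {
        // no fee when in credit
    }
}

class Overdrawn implements BalanceState {
    private static final int OVERDRAFT_FEE = 25;

    public void applyMonthlyCharges(Account account) {
        account.charge(OVERDRAFT_FEE);
    }
}

class Account {
    private BalanceState state = new InCredit();

    void charge(int amount) { /* ... */ }

    // No if, no else: the current state object decides.
    void monthEnd() {
        state.applyMonthlyCharges(this);
    }
}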
I wish people would be a bit braver and use the code to express what they are trying to do and not worry about whether the way they are doing it is against Common Practice. Remember, the majority of software projects are still failures, so why follow Common Practice - it isn’t working!

Quite.
Had the "Design Patterns" movement been popular in 1960, its goal would have been to train programmers to recognize situations in which the "subroutine" pattern was applicable, and to implement it habitually when necessarywhich would have been disastrous. Also:
If the Design Patterns movement had been popular in the 1980's, we wouldn't even have C++ or Java; we would still be implementing Object-Oriented Classes in C with structs

Actually, I think not. Because we did have patterns in the 1990s and, guess what, programming language development did not cease. Not even in C++.
What Enerjy found was that routines with CCNs of 1 through 25 did not follow the expected result that greater CCN correlates to greater probability of defects.

Not so. What Enerjy say their result concerns is:
the correlation of Cyclomatic Complexity (CC) values at the file level [...] against the probability of faults being found in those files [...] [my emphasis]

It's interesting all by itself that there's a sweet spot for the total complexity of the code in a file, which for Java pretty much means all the methods in a class. However, Binstock suggests that
[...] for most code you write, CCN does not tell you anything useful about the likelihood of your code’s quality.

Which it might not if you only think about it as a number attached to a single method, and if there are no methods of high complexity. But there are methods of high complexity—and they are likely to put your class/file into the regime where complexity is shown to correlate with the likelihood of defects. Watch out for them.
Turning the Tables

David contends that Fit's table-oriented approach affords large tests, with lots of information in each test. He's right. I like the tables because most of the automated testing gigs that I've done have involved financial trading systems and the users of those eat, drink, and breathe spreadsheets. I love that I can write a fixture that will directly parse rows off a spreadsheet built by a real trader to show the sort of thing that they mean when they say that the proposed system should blah blah blah. The issue that David sees is that these rows probably contain information that is, variously: redundant, duplicated, irrelevant, obfuscatory and various other epithets. He's right, often they do.
Suck it and See

And this is how gauges are used in fabrication. You don't work anything out from a gauge. What you do is apply it to see if the workpiece is within tolerance or not. And then you trim a bit off, or build a bit up, or bend it a bit more, or whatever, and re-apply the gauge. And repeat. And it doesn't really matter how complicated an object the gauge itself is (or how hard it was to make—and it's really hard to make good gauges), because it is used as if it were both atomic and a given. It's also made once, and used again and again and again and again...
This occurred to me while chatting with one of my Swiss colleagues recently. The Swiss end of the business does a lot of successful RUP engagements: that's right, successful RUP. The reason that they are successful is twofold. Firstly, they always do a development case, so they always carefully pick and choose which roles, disciplines and the rest they will or will not use on a project. Secondly, they understand that RUP projects really are supposed to be really iterative and really incremental, that almost all the disciplines go on to a greater or lesser extent in all phases. A (different) Swiss colleague once asked me what the difference was between Agile and RUP. My only semi-flippant answer was that if you do RUP the way Philippe Kruchten wants you to do it, then not much.
We are asked to consider those mighty figures of the past, working at institutions like the MIT AI Lab, producing "fundamental architectural components of code which everyone on earth points their CPU at a zillion times a day". OK. Let's consider them. And it's admitted that "some of [them] - perhaps even most - were, at some time in their long and productive careers, funded by various grants or other sweetheart deals with the State." No kidding.
Go take a look at, for example, the Lambda the Ultimate papers, a fine product of MIT. Now, MIT is a wealthy, independent, private university. So who paid for Steele and the rest to gift the world with the profound work of art that is Scheme? AI Memo 349 reports that the work was funded in part by an Office of Naval Research contract. The US Navy wanted "An Interpreter for Extended Lambda Calculus"? Not exactly. In 1975 "AI" meant something rather different and grander than it does today. And it was largely a government-funded exercise. This Google talk gives a compelling sketch of the way that The Valley is directly a product of the military-industrial complex, that is, of interventionist government funding. Still today, the military are a huge funder of research, and buyer of software development effort and hardware to run the results upon, which (rather indirectly) pushes vast wodges of public cash into the hands of technology firms. Or even directly: Bell Labs, for instance, received direct public funding in the form of US government contracts.
In the UK, at least, the government agencies that pay for academic research (of all kinds) are beginning to wonder, in quite a serious, budget-slashing kind of way, if they're getting value for money. So, naturally, the research community is doing some research to find out. One reason that this is of interest to me is that my boss (or, my boss's boss anyway) did some of this research. Sorry that those papers are behind a pay gate. I happen to be a member of the ACM, so I've read this one and one of the things it says is that as of 2000 the leading SCM tools generated revenues of nearly a billion dollars for their vendors. And where did the ideas in those valuable products come from? They came, largely, from research projects. What the Impact Project is seeking to do is to identify how ideas from research feed forward into industrial practice, and they are doing this by tracing features of current systems back to their sources. Let's take SCM.
The observation is made that the diff [postscript] algorithm (upon which, well, diffing and merging rely) is a product of the research community. From 1976. With subsequent significant advances made (and published in research papers) in 1984, '91, '95 and '96. Other research ideas (such as using SCMs to enforce development processes) didn't make a significant impact in industry.
Part of the goal of Impact is to:
educate the software engineering research community, the software practitioner community, other scientific communities, and both private and public funding sources about the return on investment from past software engineering research [and] project key future opportunities and directions for software engineering research and practice [and so] help to identify the research modalities that were relatively more successful than others.

In other words, find out how to do more of the stuff that's more likely to turn out to be valuable. The bad news is that it seems to be hard to tell what those are going to be.
I focus a little on the SCM research because that original blog post that got me going claims that
most creative programming these days comes from free-software programmers working in their spare time. For example, the most interesting new software projects I know of are revision control systems, such as darcs, Monotone, Mercurial, etc. As far as Washington knows, this problem doesn't even exist. And yet the field has developed wonderfully.

I would be very astonished to find that a contemporary (I write in early 2008–yes, 2008 and I still don't have a flying car!) update of that SCM study would conclude that the distributed version control systems were invented out of thin air in entirely independent acts of creation. (They do mention that the SCM vendors, when asked, tended to claim this of their products.)
The creation of Mercurial was a response to the commercialization of BitKeeper, and BitKeeper would seem to have been inspired by/based on TeamWare. Those seem to have all been development efforts hosted inside corporations, which is cool. I'd be interested to learn that McVoy at no time read any papers or met any researchers that talked about anything that resembled some sort of version control that was kind-of spread around the place. The Mercurial and Bazaar sites both cite this fascinating paper [pdf] which cites this posting. Which tells us that McVoy's approach to DVCS grew out of work at Sun (TeamWare) done to keep multiple SCCS repositories in sync. Something that surely more people than McVoy wanted to do. SCCS was developed at Bell Labs (and written up in this paper [pdf], in IEEE Transactions in 1975).
One of the learnings from Impact is that what look like novel ideas from a distance in general turn out, upon closer inspection, to have emerged from a general cloud of research ideas that were knocking around at the time. The techniques used in the Impact studies have developed, and this phenomenon is much more clearly captured in the later papers. So what does that tell us?
Well, it tells us that it's terribly hard to know where ideas came from, once you have them. And that makes it terribly hard to guess well what ideas are going to grow out of whatever's going on now. So perhaps there isn't a better way than to generate lots of solutions, throw them around the place and see what few of them stick to a problem. Which is going to be expensive and inefficient–upsetting for free-marketeers, but then perhaps research should be about what we need, not merely what we want (which the market can provide just fine, however disastrous that often turns out to be). Anyway, I heard somewhere once that you can't always get what you want.
Back to that original, provocative, blog posting. It's claimed that, as far as the problems that the recent crop of DVCS systems address go, "As far as Washington knows, this problem doesn't even exist." Applying a couple of levels of synecdoche and treating "Washington" as "the global apparatus of directly and indirectly publicly-funded research", it would perhaps be better to say that "Washington thinks that it already paid for this problem to be solved decades ago". Washington might be mistaken to think that, but it's a rather different message.
Such a machine would already be as thin and as light as I care about, and as capable as I want. But I can't have one, apparently. I could have this new "air" frippery–and doesn't it photograph well? Don't you love the way that in that 3/4 overhead shot you can't see the bulge underneath and it looks even thinner than it actually is? But really, a laptop with an off-board optical drive? Well, maybe that's the future, what with renting movies on iTunes and what have you, but...
Apple (ie, Jonathan Ive) have gotten really good at
Recently this little gem resurfaced on reddit, which prompted a certain line of investigation. Normally I spread a lot of links around my posts[*] partly as aides-mémoire for myself, partly because my preferred influencing style is association, and partly because I believe that links are content. But I really want you to go and look at this cartoon, right now. Off you go.
Back again? Good. This "learning curve" metaphor is an interesting one. Before getting into this IT lark I was in training to be a physicist, so curves on charts are strongly suggestive to me. I want them to represent a relationship between quantities that reveals something about an underlying process. I want the derivatives at points and areas under segments to mean something. What might these meanings be in the case of the learning curve?
Most every-day uses of the phrase "learning curve" appeal to a notion of how hard it is to acquire some knowledge or skill. We speak of something difficult having a "steep" learning curve, or a "high" learning curve, or a "long" one. We find the cartoon funny (to the extent that we do—and you might find that it's in the only-funny-once category) because our experience of learning vi was indeed a bit like running into a brick wall. Learning emacs did indeed feel a little bit like going round in circles, it did indeed seem as if learning Visual Studio was relatively easy but ultimately fruitless.
But where did this idea of a learning curve come from? A little bit of digging reveals that there's only one (family of) learning curve(s), and what it/they represent is the relationship between how well one can perform a task vs how much practice one has had. It is a concept derived from the worlds of the military and manufacturing, so "how well" has a quite specific meaning: it means how consistently. And how consistently we can perform an action is only of interest if we have to perform the action many, many times. Which is what people who work in manufacturing (and, at the time that the original studies were done, the military) do.
And it turns out, in pretty much all the cases that anyone has looked at, that the range of improvement that is possible is huge (thus all learning curves are high), and that the vast majority of the improvement comes from the early minority of repetitions (thus all learning curves are steep). Even at very high repetition counts, tens or hundreds of thousands, further repetitions can produce a marginal improvement in consistency (thus all learning curves are long). This is of great interest to people who plan manufacturing, or other, similar, operations because they can then do a little bit of experimentation to see how many repetitions a worker needs to do to obtain a couple of given levels of consistency. They can then fit a power-law curve through that data and predict how many repetitions will be needed to obtain another, higher, required level of consistency.
Actual learning curves seem usually to be represented as showing some measure of error, or variation, starting at some high value and then dropping, very quickly at first, as the number of repetitions increases.
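The usual fitted form (I'm stating the textbook power law here, not anything from those particular studies) is something like E(n) = E1 × n^(−b), where E(n) is the error or variation on the n-th repetition, E1 is the error on the first, and b > 0 is the fitted learning rate. Two measured points pin down E1 and b, after which you can solve for the n needed to reach any further required level of consistency.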
Which is great if large numbers of uniform repetitions are how you add value.
But, if we as programmers believe in automating all repetition, what then for the learning curve?
[*] Note: for those of you who don't like this trait, recall that you don't have to follow the links.
Commentators are making statements like this:
In certain programming cultures, people consider Design Patterns to be a core set of practices that must be used to build software. It isn’t a case of when you need to solve a problem best addressed by a design pattern, then use the pattern and refer to it by name. It’s a case of always use design patterns. If your solution doesn’t naturally conform to patterns, refactor it to patterns.

What Reg originally said was "Most people consider Design Patterns to be a core set of practices that must be used to build software." Most? Really most? I challenged this remarkable claim, and Reg subsequently re-worded it as you see above. Quite who's in this certain culture I don't know. And apparently neither does Reg:
We could argue whether this applies to most, or most Java, or most BigCo, or most overall but not most who are Agile, or most who have heard the phrase "Design Patterns" but have never read the book, or most who have read the book but cannot name any pattern not included in the book, or... But it seems off topic. So I have modified the phrase to be neutral.

But if that "certain culture" doesn't exist, then what's the problem? And if it does exist, then why is it so hard to pin down?
There are anecdotes:
When I was [...] TA for an intro programming class [...] I was asked to whip up a kind of web browser "shell" in Java. [...] Now, the first language that I learned was Smalltalk, which has historically been relatively design-pattern-free due to blocks and whatnot, and I had only learned Java as an afterthought while in college, so I coded in a slightly unorthodox way, making use of anonymous inner classes (i.e., shitty lambdas), reflection, and the like. I ended up with an extremely loosely coupled design that was extremely easy to extend; just unorthodox.

So, maybe there are some teachers of programming (an emphatically different breed from industry practitioners) who over-emphasize patterns. That I can believe. And it even makes sense, because patterns are a tool for knowledge management, and academics are in the business of knowing stuff.
When I gave it to the prof, his first reaction, on reading the code, was...utter bafflement. He and another TA actually went over what I wrote in an attempt to find and catalog the GoF patterns that I'd used when coding the application. Their conclusion after a fairly thorough review was that my main pattern was, "Code well."
It's where Patterns came from

But what's this? "Smalltalk [...] has historically been relatively design-pattern-free due to blocks and whatnot" That's simply not true. In fact, the idea of using Alexander's pattern concept to capture recurrent software design ideas comes from a group that has a large intersection with the Smalltalk community. There's a Smalltalk-originated pattern called MVC, you may have heard of it being applied here and there. Smalltalk is pattern-free? Where would such confusion come from? I think that the mention of blocks is the key.
That Smalltalk has blocks (and the careful choice of the objects in the image) gives it a certain property, that application programmers can create their own control-flow structures. Smalltalk doesn't have while and what have you baked in. Lisps have this same property (although they do tend to have cond and similar baked in) through the mechanism of macros.
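Java can only gesture at this, but a sketch of the idea (hypothetical names, with lambdas standing in for Smalltalk blocks) might look like:

import java.util.function.BooleanSupplier;

final class Control {
    // A user-defined looping construct, in the spirit of Smalltalk's
    // whileTrue:. (In Smalltalk even the bottoming-out is a message
    // send; Java has to cheat and use its built-in while here.)
    static void whileTrue(BooleanSupplier condition, Runnable body) {
        while (condition.getAsBoolean()) {
            body.run();
        }
    }
}

class Demo {
    public static void main(String[] args) {
        int[] i = {0}; // an array so the lambdas can mutate it
        Control.whileTrue(() -> i[0] < 3, () -> {
            System.out.println("i = " + i[0]);
            i[0]++;
        });
    }
}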
Now, this is meant to mean that patterns disappear in such languages because design patterns are understood to be missing language features, and in Lisp and Smalltalk you can add those features in. So there shouldn't be any missing. The ur-text for this sort of thinking is Norvig's Design Patterns in Dynamic Programming.
The trouble is that he only shows that 16 of the 23 GoF patterns are "invisible or simpler" in Lisp because of various (default) Lisp language features. This has somehow been inflated in people's minds to be the equivalent of "there are no patterns in Lisp programs", where "Lisp" is understood these days as a placeholder for "my favourite dynamic language that's not nearly as flexible as Lisp".
If there are no such things as patterns in the Lisp world, then there are no statements a master programmer can give as advice for junior programmers, and hence there is no difference at all between a beginning programmer and a master programmer in the Lisp world.

Because that's what patterns are about, they are stories about recurrent judgments, as made by the masters in some domain.
Just in passing, here's a little gem from Lisp. Macros are great, but Common Lisp macros have this property called variable capture, which can cause problems. There are a bunch of things you can do to work around this; On Lisp gives five. Wait, what? Surely this isn't a bunch of recurrent solutions to a problem in a context? No, that couldn't be. As it happens, it is possible to make it so that macros don't have this problem at all. What you get then is Scheme's hygienic macros, which the Common Lisp community doesn't seem too keen to adopt. Instead, they define variable capture to be a feature.
Now, it's tempting to take these observations about the Lisp world and conclude that the whole "no patterns (are bunk because) in my language" thread is arrant nonsense and proceed on our way. But I think that this would be to make the mistake that the anti-patterns folk are making, which is to treat patterns as a primarily technical and not social phenomenon.
Actually, a little bit too social for my taste. I've been to a EuroPLoP and work-shopped (organizational) patterns of my own co-discovery, and it'll be a good while before I go to another one–rather too much of this sort of thing for my liking. But that's my problem, it seems to work for the regulars and more power to them.
Contempt

Look again at Reg's non-definition of the "certain programming cultures":
[...] most, or most Java, or most BigCo, or most overall but not most who are Agile, or most who have heard the phrase "Design Patterns" but have never read the book, or most who have read the book but cannot name any pattern not included in the book, [...]

doesn't it seem a little bit as if these pattern-deluded folks are mainly "programmers I don't respect"?
This is made pretty explicit in another high-profile pattern basher's writing:
The other seminal industry book in software design was Design Patterns, which left a mark the width of a two-by-four on the faces of every programmer in the world, assuming the world contains only Java and C++ programmers, which they often do. [...] The only people who routinely get excited about Design Patterns are programmers, and only programmers who use certain languages. Perl programmers were, by and large, not very impressed with Design Patterns. However, Java programmers misattributed this; they concluded that Perl programmers must be slovenly [characters in a strained metaphor].

Just in case you're in any doubt, he adds
I'll bet that by now you're just as glad as I am that we're not talking to Java programmers right now! Now that I've demonstrated one way (of many) in which they're utterly irrational, it should be pretty clear that their response isn't likely to be a rational one.No, of course not.
A guy who works for Google once asked me why I thought it should be that, of all the Googlers who blog, Steve has the highest profile. It's probably got something to do with him being the only one who regularly calls a large segment of the industry idiots to their face.
And now don't we get down to it? If you use patterns in your work, then you must be an idiot. Not only are you working on one of those weak languages, but you don't even know enough to know that GoF is only a bunch of work-arounds for C++'s failings, right? I think Dick Gabriel's comment goes to the heart of it. If patterns are the master programmer's advice to the beginning programmer, codified, then to use patterns is to admit to being a beginner. And who'd want to do that?