XPDay London Session Submission Process

If you want to propose a session for XP Day London '08 (and I hope that you do), then here's what you need to do.

We want the conference to be largely self–organizing this year, so the submission process is suitably lightweight.

Experience Reports
Please send a very brief outline of your report to submissions2008@xpday.org. You will receive a URL to a google document. Make sure that you include an email address corresponding to a google account (or let us know if the address you send from is the one to use). The document will be writeable by you, the committee, and all other submitters. Develop your report there. As in past years, we invite the community to collaborate to help maximize the quality of all submissions. After the close of submissions the committee will make a selection of experience reports to be presented at the conference. The selected reports will be shepherded between acceptance and the conference.

Conference Sessions
Most of the duration of the conference (other than experience reports) will be OpenSpace. There will be Lightning Talks early in the day where people can publicize topics that they would like to discuss. We encourage people to engage in this way. There will be optional rehearsals at XtC in the weeks running up to the conference. You do not need to attend these to give a lightning talk at the conference. If you would like to have such a rehearsal, send a title and a google account email address to submissions2008@xpday.org. You will be allocated to a rehearsal slot. We invite groups outside London to schedule rehearsals too, and are happy to help with scheduling those. There will be a google calendar with the slots shown.

Some kinds of session can benefit from some preparation of materials and some logistics on the day. For example, a workshop involving some sort of server, or extensive materials. There will be a limited number of time slots during the conference for such sessions. Please submit a brief outline to submissions2008@xpday.org indicating an email address associated with a google account. You will be sent the URL of a google document within which to develop your proposal in collaboration with other submitters. After the close of submissions these proposals will be assessed by the committee and suitable sessions will be selected for the program.

We have decided to extend the submission period, so submissions for experience reports, rehearsals and programmed sessions now close on Friday August 8th 2008.

This year we want to de–emphasize sessions introducing Agile or Scrum or XP or TDD or... and promote topics of current interest to practitioners. We want the conference to be a forum in which the state of the art is advanced. That doesn't mean that only experts are welcome, or welcome to present. Experts have their failure modes; simple questions from journeymen often reveal the essence.

All are welcome, so get your thinking caps on!

TDD, Mocks and Design

Michael Feathers has posted an exploration of some ideas about, and misconceptions of, TDD. I wish that more people were familiar with this story that he mentions:
John Nolan, the CTO of a startup named Connextra [...] gave his developers a challenge: write OO code with no getters. Whenever possible, tell another object to do something rather than ask. In the process of doing this, they noticed that their code became supple and easy to change.
That's right: no getters. Well, Steve Freeman was amongst those developers and the rest is history. Tim Mackinnon tells another part of the story. I think that there's actually a little bit missing from Michael's description. I'll get to it at the end.


A World Without Getters

Suppose that we want to print a value that some object can provide. Rather than writing something like statement.append(account.getTransactions()), we would write something more like account.appendTransactionsTo(statement). We can test this easily by passing in a mocked statement that expects to have a call like append(transaction) made. Code written this way does turn out to be more flexible, easier to maintain and also, I submit, easier to read and understand, partly because this style lends itself well to the use of Intention Revealing Names.
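For concreteness, here's a minimal sketch of that style with a hand-rolled mock. The names (Account, Statement, MockStatement) are my own inventions for illustration, not anything from Connextra's code:

```java
import java.util.ArrayList;
import java.util.List;

interface Statement {
    void append(String transaction);
}

class Account {
    private final List<String> transactions = new ArrayList<String>();

    void record(String transaction) {
        transactions.add(transaction);
    }

    // Tell, don't ask: there is no getTransactions(); the account
    // pushes its transactions to whatever Statement it is given.
    void appendTransactionsTo(Statement statement) {
        for (String t : transactions) {
            statement.append(t);
        }
    }
}

// A hand-rolled mock: the test verifies the interaction, not the state.
class MockStatement implements Statement {
    final List<String> received = new ArrayList<String>();

    public void append(String transaction) {
        received.add(transaction);
    }
}
```

A test creates a MockStatement, calls account.appendTransactionsTo(mock), and then checks that the expected append calls were made.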

This is the real essence of TDD with Mocks. It happens to be true that we can use mocks to stub out databases or web services or what all else, but we shouldn't. Not doing that leads us to write code for each sub-domain within our application in terms of very narrow, very specific interfaces with other sub-domains and to write transducers that sit at the boundaries of those domains. This is a good thing. At the largest scale, with functional tests, it leads to hexagonal architecture. And that can apply equally well recursively down to the level of individual objects.

The next time someone tries to tell you that an application has a top and a bottom and a one-dimensional stack of layers in between like pancakes, try exploring with them the idea that what systems really have is an inside and an outside and a nest of layers like an onion. It works wonders.

If we've decided that we don't mock infrastructure, and we have these transducers at domain boundaries, then we write the tests in terms of the problem domain and get a good OO design. Nice.


The World We Actually Live In

Let's suppose that we work in a mainstream IT shop, doing in-house development. Chances are that someone will have decided (without thinking too hard about it) that the world of facts that our system works with will live in a relational database. It also means that someone (else) will have decided that there will be an object-relational mapping layer, based on the inference that since we are working in Java (C#), which is deemed by Sun (Microsoft) to be an object-oriented language, then we are doing object-oriented programming. As we shall see, this inference is a little shaky.

Well, a popular approach to this is to introduce a Data Access Object as a facade onto wherever the data actually lives. The full-blown DAO pattern is a hefty old thing, but note the "transfer object" which the data source (inside the DAO) uses to pass values to and receive values from the business object that's using the DAO. These things are basically structs, their job is to carry a set of named values. And if the data source is hooked up to an RDBMS then they more-or-less represent a row in a table. And note that the business object is different from the transfer object. The write-up that I've linked to is pretty generic, but the inference seems to be invited that the business object is a big old thing with lots of logic inside it.
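To make the shape of the thing concrete, a transfer object is little more than this (AccountTO and its fields are illustrative, not taken from the linked write-up):

```java
// A transfer object: a bag of named values with getters and setters,
// more or less mirroring a row in an ACCOUNTS table.
class AccountTO {
    private long id;
    private String owner;
    private long balancePence;

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }

    public String getOwner() { return owner; }
    public void setOwner(String owner) { this.owner = owner; }

    public long getBalancePence() { return balancePence; }
    public void setBalancePence(long balancePence) { this.balancePence = balancePence; }
}
```

Note that there is no behaviour here at all: it is a struct in all but name.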

A lot of the mechanics of this are rolled up into nice frameworks and tools such as Hibernate. Now, don't get me wrong in what follows: Hibernate is great stuff. I do struggle a bit with how it tends to be used, though. Hibernate shunts data in and out of your system using transfer objects, which are (let's say) Java Beans festooned with getters and setters. That's fine. The trouble begins with the business objects.


Irresponsible and Out of Control

In this world another popular approach is, whether it's named as such or not, whether it's explicitly recognized or not, robustness analysis. A design found by robustness analysis (as I've seen it in the wild, which may well not be what's intended; see comments on ICONIX) is built out of "controllers", big old lumps of logic, and "entities", bags of named values (and a few other bits and bobs). Can you see where this is going? There are rules for robustness analysis and one of them is that entities are not allowed to interact directly, but a controller may have many entities that it uses together.

Can you imagine what the code inside the update method on the GenerateStatementController (along with its Statement and Account entities) might look like?
Hmmm.


Classy Behaviour

Whenever I've taught robustness analysis I've always contrasted it with Class Responsibility Collaboration, a superficially similar technique that produces radically different results. The lesson has been that RA-style controllers always, but always, hide valuable domain concepts.

It's seductively easy to bash in a controller for a use case and then bolt on a few passive entities that it can use without really considering the essence of the domain. What you end up with is the moral equivalent of stored procedures and tables. That's not necessarily wrong, and it's not even necessarily bad depending on the circumstances. But it is completely missing the point of the last thirty-odd years' worth of advances in development technique. One might almost as well be building the system in PRO*C.

Anyway, with CRC all of the objects we find are assumed to have the capability of knowing things and doing stuff. In RA we assume that objects either know stuff or do stuff. And how does a know-nothing stuff-doer get the information to carry out its work? Why, it uses a passive knower, an entity which (ta-daaah!) pops ready-made out of a DAO in the form of a transfer object.

And actually that is bad.


Old Skool

Back in the day the masters of structured programming[pdf] worried a lot about various coupling modes that can occur between two components in a system. One of these is "Stamp Coupling". We are invited to think of the "stamp" or template from which instances of a struct are created. Stamp coupling is considered (in the structured design world) one of the least bad kinds of coupling. Some coupling is inevitable, or else your system won't work, so one would like to choose the least bad ones, and (as of 1997) stamp coupling was a recommended choice.

OK, so the thing about stamp coupling is that it implicitly couples together all the client modules of a struct. If one of them changes in a way that requires the shape of the struct to change then all the clients are impacted, even if they don't use the changed or new or deleted field. That actually doesn't sound so great, but if you're bashing out PL/1 it's probably about the best you can do. Stamp coupling is second best, with only "data" coupling as preferable: the direct passing of atomic values as arguments. Atomic data, eh? We'll come back to that.
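In Java terms the problem looks something like this (hypothetical names, my own sketch): two clients that each use only one field of a shared record are nonetheless both coupled to its whole shape:

```java
// The "stamp": a shared record type.
class CustomerRecord {
    String name;
    String postcode;
    // Rename, retype or remove any field here and every client of
    // CustomerRecord is impacted, whether it uses that field or not.
}

class LabelPrinter {
    // Uses only name, but depends on all of CustomerRecord...
    String labelFor(CustomerRecord r) { return "To: " + r.name; }
}

class DeliveryRouter {
    // ...as does this, which uses only postcode.
    String routeFor(CustomerRecord r) { return "Route via " + r.postcode; }
}
```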

However, the second worst kind of coupling that the gurus identified was "common coupling". What that originally meant was something like a COMMON block in Fortran, or global variables in C, or pretty much everything in a COBOL program: just a pile of values that all modules/processes/what have you can go and monkey with. Oops! Isn't that what a transfer object that comes straight out of a (single, system-wide) database ends up being? This is not looking so good now.

What about those atomic data values? What was meant back in the day was what we would now call native types: int, char, that sort of thing. The point being that these are safe because it's profoundly unlikely that some other application programmer is going to kybosh your programming effort by changing the layout of int.
And the trouble with structs is that they can. And the trouble with transfer objects covered in getters and setters is that they can, too. But what if there were none...


Putting Your Head in a Bag Doesn't Make you Hidden

David Parnas helped us all out a lot when in 1972 he made some comments[pdf] on the criteria to be used in decomposing systems into modules:
Every module [...] is characterized by its knowledge of a design decision which it hides from all others. Its interface or definition was chosen to reveal as little as possible about its inner workings.
Unfortunately, this design principle of information hiding has become fatally confused with the implementation technique of encapsulation.

If the design of a class involves a member private int count then encapsulating that behind a getter public int getCount() hides nothing. When (not if) count gets renamed, changed to a big integer class, or whatever, all the client classes need to know about it.
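The contrast, sketched with a made-up Counter example: the getter encapsulates count but hides nothing, while a behaviour-revealing method genuinely hides the design decision:

```java
// Encapsulated, but not hidden: the getter commits every client to int.
class LeakyCounter {
    private int count;
    public void increment() { count++; }
    public int getCount() { return count; } // change count's type and all clients change
}

// Information hiding: clients know only the behaviour they asked for.
class Counter {
    private int count; // could become a big integer class tomorrow; no client would notice
    public void increment() { count++; }
    public boolean hasReached(int limit) { return count >= limit; }
}
```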

I hope you can see that if we didn't have any getters on our objects then this whole story unwinds and a nasty set of design problems evaporate before our eyes.


What was the point of all that, again?

John's simple-sounding request: write code with no getters (and thoroughly test it; quite important, that bit) is a miraculously clever one. He is a clever guy, but even so that's good.

Eliminating getters leads developers down a route that we have known is beneficial for thirty years, without really paying much attention. And the idea has become embedded in the first really new development technique to come along for a long time: TDD. What we need to do now is make sure that mocking doesn't get blurred in the same way as a lot of these other ideas have been.

First step: stop talking about mocking out infrastructure, start talking about mocks as a design tool.

Image

Rick Minerich asks the question "Why are our programs still represented by flat files?" To which my answer is "not all of them are." 

Judging by the tag cloud on his blog, Rick works mainly in the Microsoft end of the enterprisey world. If VisualStudio is the main front end to your system then I can see why the question would arise. Of course, in other parts of the IT world programs stopped being in flat files a long time ago. Or even never were. But you have to go looking for that world. As luck would have it, Gilad Bracha has just recently explained some of the very pleasant aspects of that world. But so many programmers either don't know it exists, or once shown it refuse to enter.  

A Painful Learning Experience Revisited

Painful but valuable. This old thing has popped up onto folks' radars again, with some interesting discussion here.

I'm sad that the PoMoPro experiment kind-of fizzled out. I occasionally try to talk Ivan into doing a second conference, but without success.

XP Day London Call

I'm pleased to announce that the call for submissions to XpDay London 2008 is now open. 

Get your thinking caps on, and in due course an electronic submission system will be announced.

The Process for Programming (and why it might not help)

Paul Johnson writes about the infamous 1:10 productivity ratios in programming, and suggests that we don't know how to program, that is, there is no process for programming.

I take the view that on the contrary, we do know how to program, and there pretty much is a process for programming, but it isn't much recognized because it doesn't do what so many people so desperately want a process for programming to do.

The process for programming that I have in mind is very much like the process for doing maths—I suspect that this is why (reputedly) everyone who goes to work as a programmer at Microsoft is given a copy of How to Solve It. What both activities have in common is a need to do imaginative, creative problem solving within strict constraints.

Paul lists a bunch of activities, of which he suggests that "the core activity is well understood: it is written down, taught and learned". One of these is "curing a disease".

I find that questionable. Have you ever watched House? Apart from being a very entertaining update of Sherlock Holmes, what goes on in the early seasons of House is (within the limits of television drama, to the eyes of this layman) pretty close to the actual investigations and diagnoses I have read in The Medical Detectives,  The Man Who Mistook His Wife for a Hat and a few other of the bits of source material. To a surprising extent, they aren't making that stuff up. See how the diagnostic team blunder about? See how they follow false leads? See how some apparently unrelated fact combines in the prepared mind with a huge mass of prior experience to prompt new ideas? See how they must take a chance on something that they don't know will work?

There are definite tools that are used again and again, primarily differential diagnosis (which, BTW, makes the average episode of House into a good short course on debugging), and the various physiological tests and so forth, but mostly it's a synthesis of knowledge and experience. If you ever get the chance to hear Brian Marick tell his story about how vets learn to recognize "bright" vs "dull" cows you'll get a good idea of just how the teaching of medicine works. And it works to a large extent by the inculcation of tacit knowledge.

Tacit knowledge is terribly hard stuff to deal with; it's in the "unknown knowns" quadrant. One of the things that makes teaching a skill difficult is the manifestation of tacit knowledge*. A big part of what a lot of people want a "process" for programming to do is to make explicit exactly that tacit knowledge.

But there is only one way to do that really well, which is exactly why things like teaching hospitals exist. The way to do it is to try repeatedly with feedback from an expert. The external feedback is crucial. You may have heard somewhere that practice makes perfect. In fact, all practice does by itself is make consistent. It's the expert feedback that leads to improvement. 

This is one reason why folks are unhappy with what we do know about how to program: it cannot be learned quickly from a book or a one-week course. There is another reason, perhaps worse: what we know about programming does not (and, if it parallels other disciplines based on skill and judgment, never will) equip programmers to produce a given outcome on demand, on time, every time.

The idea has got around (originating perhaps with "One Best Way" Taylor) that a process for any given job should be like a recipe—follow it exactly and get the same result every time. In some spheres of work such things are possible. Programming does not seem to be one of them, no matter how much we might wish it.

* Design (and other kinds) of patterns are in part about capturing this stuff, which they do variously well. I suspect that this is part of the reason why more experienced folks who haven't seen them before say things like "that's obvious" or "anyone who doesn't know/couldn't work that out shouldn't be a programmer". 

XpDay 2008

XpDay London will be on 11 and 12 of December, at Church House in the City of Westminster. The call for submissions will come soon. 

We're aiming for something a little different this year. The conference committee (in as far as it has one) has decided that if we belong to a community that really does value individuals and their interactions more than processes and tools, and responding to change more than following a plan, then our conference should work in a way consonant with that. I'm Programme Co-chair, so a lot of the mechanics of how that will work in practice fall to me to figure out–and this is almost done. I'll keep you posted.

Why You Aren't Donald Knuth

Nice interview over at InformIT (it's that Binstock guy again, why had I not heard of him a couple of weeks ago?) with Prof Knuth.

He makes this observation:
[...] the idea of immediate compilation and "unit tests" appeals to me only rarely, when I’m feeling my way in a totally unknown environment and need feedback about what works and what doesn’t. Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about. Nothing needs to be "mocked up."
I believe him. I also draw a few inferences. 

One is that Knuth probably doesn't spend much time working on systems that do their work by intimately coördinating (with) six or eight other systems with very different usage patterns, metaphors, implementation stacks, space and time complexities, latencies, interface styles/languages/protocols etc. etc. etc. 

Another is that he probably doesn't spend much time working on problems that are incompletely and ambiguously described (and that can't be fixed in a meaningful time and/or budget), part of an ad–hoc domain, intrinsically fuzzy, aiming for a moving target from a moving platform, subject to capricious and arbitrary constraints, etc. etc. etc.

And thus is the working life of the research computer scientist different from the working life of the commercial software engineer. I'm tempted to say (and so I will) that the jobbing industrial programmer spends more time in an unknown environment and needs more feedback (and, right now!) about what works and what doesn’t more often than a researcher does.

Postscript

By the way, anyone* who reads that interview and thinks to themselves "ha! See! Knuth doesn't unit test, so I don't need to either" needs to consider a couple of things:
  1. you aren't Don Knuth 
  2. are you really prepared to do all the other stuff that Knuth does so that his programs still work without unit testing?
  3. if so, who do you imagine is going to pay for you to do so?
The "I'm too clever to test" crowd does exist. I'd be amazed if any of them read this blog, but those of you who do will likely meet them from time to time. An electron-microscopically tiny proportion of them are right (see consideration 1).

Creation Under Constraints

It's a symptom of the bourgeoisie's self-hatred that creation (or worse yet, creativity) is seen as a wild, undisciplined, anarchic thing. And yet every person who'd be called an artist that I've ever met has been passionately interested in form and structure and rules and schemata and in doing work under constraints that force them to try new things. That's right, introducing constraints can increase creativity, can bring to light new and exciting possibilities that were, so to speak, hidden by the freedom.

Haiku. Silverpoint drawing. Strict counterpoint. You wouldn't want any of these to be your only means of expression, but what a learning exercise!

Andrew Binstock relates a certain set of constraints within which to practice OO programming. 

One of them I like a huge amount:
Wrap all primitives and strings. [...] So zip codes are an object not an integer, for example. This makes for far clearer and more testable code.
It certainly does, which was the topic of the session which Ivan and I ran at Spa this year.
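A sketch of what that constraint produces; ZipCode is the example from Andrew's list, but the five-digit validation rule is my own guess at what such a class might enforce:

```java
// A zip code as an object, not a bare int or String.
final class ZipCode {
    private final String digits;

    ZipCode(String digits) {
        // The type now enforces what an int never could.
        if (!digits.matches("\\d{5}")) {
            throw new IllegalArgumentException("not a zip code: " + digits);
        }
        this.digits = digits;
    }

    @Override public String toString() { return digits; }

    @Override public boolean equals(Object other) {
        return other instanceof ZipCode && ((ZipCode) other).digits.equals(digits);
    }

    @Override public int hashCode() { return digits.hashCode(); }
}
```

Once a method's signature says ZipCode rather than int, whole classes of test (and of bug) simply disappear.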

Another is this:
Don’t use setters, getters, or properties. [...] “tell, don’t ask.”
Indeed. What we now know as jMock was invented to support this style of programming. It works a treat.

This one, though, I'm a bit equivocal about:
Don’t use the ‘else’ keyword. Test for a condition with an if-statement and exit the routine if it’s not met.
See, it depends on what's meant. It could be that a trick is being missed. On the one hand, I'm a huge fan of code that returns early (if(roundHole.depthOf(squarePeg).sufficient()) return;), and also of very aggressive guard clauses (if(i.cantLetYouDoThat(dave)) throw new DeadAstronaut();). On the other, I believe that the constraint that does most to force you to "get" OO is this:
Don't use any explicit conditionals. Never mind else, use no if.
Can you see how that would work? I learned this so long ago that I can't remember where from (by some route from "Big" Dave Thomas, perhaps?) 
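One way it can work, with a made-up pricing example: the decision that an if would make on every call is made once, when the object is created, and thereafter polymorphic dispatch does the work:

```java
// The version we're avoiding:
//   double price(double base) { if (premium) return base * 0.9; else return base; }
// The polymorphic version: no if anywhere.
interface Pricing {
    double priceOf(double base);
}

class StandardPricing implements Pricing {
    public double priceOf(double base) { return base; }
}

class PremiumPricing implements Pricing {
    public double priceOf(double base) { return base * 0.9; }
}
```

Each branch of the old conditional becomes a class, and client code simply holds whichever Pricing applies.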

Oh yes, and for bonus points, Andrew mentions this: "Don’t use any classes with more than two instance variables". I suggest that you try not using any classes that have any instance variables, and see how far you get. You might find that to make progress you need a better language.

Pedanto-linguistics

So, there's a couple of systems that I've been using fairly intensively in the last couple of months, both concerned with the programme for Agile 2008 (which is looking pretty good, BTW). 

One of them complains if one tries to browse certain URLs without having first logged in. In that case it tells me "You are not authorized to access this page". Which irks me every time I see it, because of course I am authorized to access that page, I just haven't yet presented the credentials that tell the system so. Which is what the system should invite me to do, rather than telling me to get hence.

The other system allows collaborative editing of documents and shows who last edited each file. When I am the person who last edited the file it says that the editor was "me". Which should, of course, be "you".

This sort of thing is a small, but I think under-appreciated, source of the discomfort that many people feel when using IT systems. We (I write as an implementor of systems) should be more careful in this regard.  

Why bzr?

I've long been a (very contented) P4 user, both at work and at home. It's less than great to be emailing patches around, though. As I've been doing more work on measure I've found that I want more and more to do version controlled work on the same code here, there and everywhere. So, the new burst of interest in DVCSs has come along at just the right time.

I've chosen Bazaar as my first DVCS. Ed asked me why, and the answer turned out to be quite interesting (to me).

Firstly, why not git? Well, I am in general not a member of the linux world mainly because computers are for me (these days, anyway) a means not an end. I no longer enjoy installing this and (re)configuring that and (re)compiling the other just for the sheer joy of using up cycles and feeling as if I know something that others don't. Nor for squeezing out the last scintilla of performance, nor whatever it is that the 1337 do these days. I want a computer to do pretty much what I need out-of-the-box. That's why I'm a Mac user. So the idea that git provides the "plumbing" and then the "porcelain" (nice metaphor...) is built on top, well I'm bored already.

What does that leave? I heard that darcs has these nasty performance cliffs that it falls off of, so no dice there. I have heard good things about Bazaar (specifically, that Nat likes it) so I'll go with that for now. So far, I really like it. Setting up exactly the distributed topology I need with the protections I want is proving to be a bit of a challenge, and the documentation seems to assume that I'm very familiar with...something...that I seem not to be (and I can't quite figure out what it is to go learn it, either), but so far so good.

I find myself a bit disappointed, though, that the FOSS crew have run quite so hard and fast with this. It's kind of a shame that people like Ed and myself can semi-legitimately get involved in a conversation about which DVCS and why. Summed over all the people like Ed and myself that's a lot of mental energy being poured down a black hole, across the industry as a whole. Especially since it seems as if there is almost nothing to choose between these tools. If only a strong industry player could have set the scene against which others could vary, but BitKeeper didn't (thinks: how much of this mess is really about sticking it to Larry McVoy "the man"?), and git hasn't, and now we have a soup of peers and all these nugatory factors to consider. What a waste.

PS: especially irksome is that I once had the pleasure of working with VisualAge for Java Micro Edition, which got all this pretty much exactly right a good long time ago.

Pols' Perfect Purchaser

Andy Pols tells a great story about just how great it can be to have your customer 1) believe in testing and 2) completely engage in your development activities. Nice.

Subtext continues to amaze

I just watched the video describing the latest incarnation of Subtext. The video shows how the liveness, direct manipulation and example–based nature of this new table–based Subtext makes the writing and testing of complex conditionals easy.

If you develop software in a way that makes heavy use of examples and tables, you owe it to yourself to know about Subtext.

A Catalogue of Weaknesses in Python

You may have read somewhere that "Patterns are signs of weakness in programming languages", and perhaps even that "16 of 23 [GoF] patterns are either invisible or simpler [in Lisp]", from which  some conclude that dynamic languages don't have/need patterns, or some such.

Interesting, then, that a recent Google dev day presentation (pdf, video) provides us with a list of terrible weaknesses (sorry, great patterns) in Python.

Language

Steve Freeman ponders if maybe the reason that I find statistical properties that seem to resemble those of natural languages in the code of jMock is because jMock was written to be like a language...

Some Spa 2008 Stuff

Chris Clarke has made a post here exploring the ideas that Ivan Moore and I presented at our Spa 2008 workshop Programming as if the Domain Mattered. Ivan has written up some of his learnings from the session here. I'll be doing the same myself soon. 
Chris makes this most excellent point:
I wish people would be a bit braver and use the code to express what they are trying to do and not worry about whether the way they are doing it is against Common Practice. Remember, the majority of software projects are still failures, so why follow Common Practice - it isn’t working!
Quite.

In other news, my experience report on the effects of introducing checked examples (aka automated acceptance/functional/user/whatever "tests")  gets this thorough write up from "Me" (who are you, Me?) and also a mention in this one from Pascal Van Cauwenberghe.

Thanks, folks.

Design Patterns of 1994

So, Mark Dominus' Design Patterns of 1972 has made it back to the front page of proggit.

He offers a pattern-styled write up of the idea of a "subroutine" and opines that:
Had the "Design Patterns" movement been popular in 1960, its goal would have been to train programmers to recognize situations in which the "subroutine" pattern was applicable, and to implement it habitually when necessary
which would have been disastrous. Also:
If the Design Patterns movement had been popular in the 1980's, we wouldn't even have C++ or Java; we would still be implementing Object-Oriented Classes in C with structs
Actually, I think not. Because we did have patterns in the 1990's and, guess what, programming language development did not cease. Not even in C++.

Debunking Debunking Cyclomatic Complexity

Over at SDTimes, this article by Andrew Binstock contains a claim that this result by Enerjy somehow "debunks" cyclomatic complexity as an indicator of problems in code. He suggests that what's shown is that for low complexity of methods (which is overwhelmingly the most common kind of complexity of methods) increasing complexity of methods is not (positively) correlated with the likelihood of defects. Binstock suggests that:
What Enerjy found was that routines with CCNs of 1 through 25 did not follow the expected result that greater CCN correlates to greater probability of defects.
Not so. What Enerjy say their result concerns is:
the correlation of Cyclomatic Complexity (CC) values at the file level [...] against the probability of faults being found in those files [...]). [my emphasis]
It's interesting all by itself that there's a sweet spot for the total complexity of the code in a file, which for Java pretty much means all the methods in a class. However, Binstock suggests that
[...] for most code you write, CCN does not tell you anything useful about the likelihood of your code’s quality.
Which it might not if you only think about it as a number attached to a single method, and that there are no methods of high complexity. But there are methods of high complexity—and they are likely to put your class/file into the regime where complexity is shown to correlate with the likelihood of defects. Watch out for them.

Red and Green

Brian Di Croce sets out here to explain TDD in three index cards. It's a nice job, except that I'm not convinced by the faces on his last card.

Brian shows a sad face for the red bar, a happy face for the green, and a frankly delirious face for the refactoring step. There's something subtly wrong with this. We are told that when the bar is green the code is clean, which is great. But the rule is that we only add code to make a failing test pass, which implies a red bar. So, the red bar is our friend!

When I teach TDD I teach that green bar time is something to get away from as soon as possible. Almost all the time a green bar occurs when a development episode is incomplete: not enough tests have been written for the functionality in hand, more functionality is expected to go into the next release, or some other completeness condition is not met.

It's hard to learn from a green bar, but a red bar almost always teaches you something. Experienced TDDers are very (and rightly) suspicious of a green-to-green transition. The green bar gives a false sense of security.

Generally speaking, in order to get paid we need to move an implementation forward, and that can only be done on a red bar. Wanting to get to the next red bar is a driver for exploring the functionality and the examples that will capture it. 

I tell people to welcome, to embrace, to seek out the red bar. And that when the bar is red, we're forging ahead.

TDD at QCon

Just finished a presentation of the current state of my TDD metrics thinking at QCon. The slides [pdf] are up on the conference site, video should be there soon, too.