John Nolan, the CTO of a startup named Connextra [...] gave his developers a challenge: write OO code with no getters. Whenever possible, tell another object to do something rather than ask. In the process of doing this, they noticed that their code became supple and easy to change. That's right: no getters. Well, Steve Freeman was amongst those developers and the rest is history. Tim Mackinnon tells another part of the story. I think that there's actually a little bit missing from Michael's description. I'll get to it at the end.
A World Without Getters
Suppose that we want to print a value that some object can provide. Rather than writing something like statement.append(account.getTransactions())
instead we would write something more like account.appendTransactionsTo(statement)
We can test this easily by passing in a mocked statement that expects to have a call like append(transaction) made.
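To make that concrete, here is a minimal sketch in Java of how such a test might look. It assumes a Mockito-style mocking library and JUnit; the Account, Statement, and Transaction types are invented for illustration:

    import static org.mockito.Mockito.*;

    import java.util.ArrayList;
    import java.util.List;

    // Invented domain types, purely for illustration.
    interface Statement {
        void append(Transaction transaction);
    }

    record Transaction(String description, long amountInCents) {}

    class Account {
        private final List<Transaction> transactions = new ArrayList<>();

        void record(Transaction transaction) {
            transactions.add(transaction);
        }

        // Tell, don't ask: no getTransactions(); the account pushes its
        // transactions to whatever wants them.
        void appendTransactionsTo(Statement statement) {
            transactions.forEach(statement::append);
        }
    }

    class AccountTest {
        @org.junit.jupiter.api.Test
        void appendsEachTransactionToTheStatement() {
            Account account = new Account();
            Transaction deposit = new Transaction("deposit", 10_00);
            account.record(deposit);
            Statement statement = mock(Statement.class);

            account.appendTransactionsTo(statement);

            verify(statement).append(deposit); // the "tell" we expected
        }
    }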
Code written this way does turn out to be more flexible and easier to maintain, and also, I submit, easier to read and understand, partly because this style lends itself well to the use of Intention Revealing Names.

This is the real essence of TDD with Mocks. It happens to be true that we can use mocks to stub out databases or web services or what all else, but we shouldn't. Not doing that leads us to write code for each sub-domain within our application in terms of very narrow, very specific interfaces with other sub-domains, and to write transducers that sit at the boundaries of those domains. This is a good thing. At the largest scale, with functional tests, it leads to hexagonal architecture. And that can apply equally well recursively, down to the level of individual objects.
The next time someone tries to tell you that an application has a top and a bottom, with a one-dimensional stack of layers in between like pancakes, try exploring with them the idea that what systems really have is an inside and an outside and a nest of layers like an onion. It works wonders.
If we've decided that we don't mock infrastructure, and we have these transducers at domain boundaries, then we write the tests in terms of the problem domain and get a good OO design. Nice.
The World We Actually Live In
Let's suppose that we work in a mainstream IT shop, doing in-house development. Chances are that someone will have decided (without thinking too hard about it) that the world of facts that our system works with will live in a relational database. It also means that someone (else) will have decided that there will be an object-relational mapping layer, based on the inference that since we are working in Java (C#), which is deemed by Sun (Microsoft) to be an object-oriented language, we must be doing object-oriented programming. As we shall see, this inference is a little shaky.

Well, a popular approach to this is to introduce a Data Access Object as a facade onto wherever the data actually lives. The full-blown DAO pattern is a hefty old thing, but note the "transfer object" which the data source (inside the DAO) uses to pass values to and receive values from the business object that's using the DAO. These things are basically structs; their job is to carry a set of named values. And if the data source is hooked up to an RDBMS then they more-or-less represent a row in a table. Note, too, that the business object is different from the transfer object. The write-up that I've linked to is pretty generic, but we seem to be invited to infer that the business object is a big old thing with lots of logic inside it.
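To put some flesh on that, a transfer object in this style might look like the following sketch (the names are invented; this is not taken from the DAO write-up):

    // A hypothetical transfer object: a struct in all but name, more or
    // less mirroring a row in an ACCOUNT table.
    public class AccountTO {
        private long id;
        private String ownerName;
        private long balanceInCents;

        public long getId() { return id; }
        public void setId(long id) { this.id = id; }

        public String getOwnerName() { return ownerName; }
        public void setOwnerName(String ownerName) { this.ownerName = ownerName; }

        public long getBalanceInCents() { return balanceInCents; }
        public void setBalanceInCents(long balance) { this.balanceInCents = balance; }
    }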
A lot of the mechanics of this are rolled up into nice frameworks and tools such as Hibernate. Now, don't get me wrong in what follows: Hibernate is great stuff. I do struggle a bit with how it tends to be used, though. Hibernate shunts data in and out of your system using transfer objects, which are (let's say) Java Beans festooned with getters and setters. That's fine. The trouble begins with the business objects.
Irresponsible and Out of Control
In this world another popular approach is, whether it's named as such or not, whether it's explicitly recognized or not, robustness analysis. A design found by robustness analysis (as I've seen it in the wild, which may well not be what's intended; see comments on ICONIX) is built out of "controllers", big old lumps of logic, and "entities", bags of named values (and a few other bits and bobs). Can you see where this is going? There are rules for robustness analysis, and one of them is that entities are not allowed to interact directly, but a controller may have many entities that it uses together.
Can you imagine what the code inside the update method on the GenerateStatementController (along with its Statement and Account entities) might look like? Hmmm.
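I'd guess at something like this, a hypothetical sketch of the RA style with all the names invented:

    import java.util.ArrayList;
    import java.util.List;

    // Passive "entities": bags of named values.
    class Transaction {
        private String description;
        private long amountInCents;
        public String getDescription() { return description; }
        public void setDescription(String description) { this.description = description; }
        public long getAmountInCents() { return amountInCents; }
        public void setAmountInCents(long amount) { this.amountInCents = amount; }
    }

    class Account {
        private final List<Transaction> transactions = new ArrayList<>();
        public List<Transaction> getTransactions() { return transactions; }
    }

    class Statement {
        private final List<String> lines = new ArrayList<>();
        private long totalInCents;
        public List<String> getLines() { return lines; }
        public long getTotalInCents() { return totalInCents; }
        public void setTotalInCents(long total) { this.totalInCents = total; }
    }

    // The "controller": all the logic lives here, and the entities are
    // merely interrogated through their getters and setters.
    class GenerateStatementController {
        public void update(Account account, Statement statement) {
            for (Transaction t : account.getTransactions()) {
                statement.getLines().add(t.getDescription() + " " + t.getAmountInCents());
                statement.setTotalInCents(statement.getTotalInCents() + t.getAmountInCents());
            }
        }
    }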
Classy Behaviour
Whenever I've taught robustness analysis I've always contrasted it with Class Responsibility Collaboration, a superficially similar technique that produces radically different results. The lesson has been that RA-style controllers always, but always, hide valuable domain concepts.
It's seductively easy to bash in a controller for a use case and then bolt on a few passive entities that it can use, without really considering the essence of the domain. What you end up with is the moral equivalent of stored procedures and tables. That's not necessarily wrong, and it's not even necessarily bad, depending on the circumstances. But it is completely missing the point of the last thirty-odd years' worth of advances in development technique. One might almost as well be building the system in PRO*C.
Anyway, with CRC all of the objects we find are assumed to have the capability of knowing things and doing stuff. In RA we assume that objects either know stuff or do stuff. And how does a know-nothing stuff-doer get the information to carry out its work? Why, it uses a passive knower, an entity which (ta-daaah!) pops ready-made out of a DAO in the form of a transfer object.
And actually that is bad.
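For contrast, here's the same statement generation done CRC-style, where each object both knows things and does stuff (again, a hypothetical sketch with invented names):

    import java.util.ArrayList;
    import java.util.List;

    // CRC-style: each object knows things and does stuff, so the
    // controller evaporates and no getters are needed.
    class Transaction {
        private final String description;
        private final long amountInCents;

        Transaction(String description, long amountInCents) {
            this.description = description;
            this.amountInCents = amountInCents;
        }

        void appendTo(Statement statement) {
            statement.addLine(description + " " + amountInCents, amountInCents);
        }
    }

    class Statement {
        private final List<String> lines = new ArrayList<>();
        private long totalInCents;

        void addLine(String line, long amountInCents) {
            lines.add(line);
            totalInCents += amountInCents;
        }
    }

    class Account {
        private final List<Transaction> transactions = new ArrayList<>();

        void appendTransactionsTo(Statement statement) {
            transactions.forEach(transaction -> transaction.appendTo(statement));
        }
    }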
Old Skool
Back in the day the masters of structured programming[pdf] worried a lot about the various coupling modes that can occur between two components in a system. One of these is "stamp coupling". We are invited to think of the "stamp" or template from which instances of a struct are created. Stamp coupling is considered (in the structured design world) one of the least bad kinds of coupling. Some coupling is inevitable, or else your system won't work, so one would like to choose the least bad ones, and (as of 1997) stamp coupling was a recommended choice: second best, with only "data" coupling preferable, that is, the direct passing of atomic values as arguments. Atomic data, eh? We'll come back to that.

OK, so the thing about stamp coupling is that it implicitly couples together all the client modules of a struct. If one of them changes in a way that requires the shape of the struct to change, then all the clients are impacted, even if they don't use the changed or new or deleted field. That actually doesn't sound so great, but if you're bashing out PL/1 it's probably about the best you can do.
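A tiny sketch of the problem in modern dress (invented names):

    // The "stamp": a struct shared by every client module.
    class CustomerRecord {
        String name;
        String streetAddress;
    }

    class BillingModule {
        String labelFor(CustomerRecord customer) {
            return customer.name + ", " + customer.streetAddress;
        }
    }

    // If BillingModule ever needs streetAddress split into billing and
    // shipping addresses, the stamp changes shape and ReportingModule is
    // impacted too, even though it never uses that field.
    class ReportingModule {
        String headlineFor(CustomerRecord customer) {
            return customer.name;
        }
    }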
However, the second worst kind of coupling that the gurus identified was "common coupling". What that originally meant was something like a COMMON block in Fortran, or global variables in C, or pretty much everything in a COBOL program: just a pile of values that all modules/processes/what-have-you can go and monkey with. Oops! Isn't that what a transfer object that comes straight out of a (single, system-wide) database ends up being? This is not looking so good now.
What about those atomic data values? What was meant back in the day was what we would now call native types: int, char, that sort of thing. The point being that these are safe because it's profoundly unlikely that some other application programmer is going to kybosh your programming effort by changing the layout of int. And the trouble with structs is that they can. And the trouble with transfer objects covered in getters and setters is that they can, too. But what if there were none...
Putting Your Head in a Bag Doesn't Make You Hidden
David Parnas helped us all out a lot when in 1972 he made some comments[pdf] on the criteria to be used in decomposing systems into modules:

"Every module [...] is characterized by its knowledge of a design decision which it hides from all others. Its interface or definition was chosen to reveal as little as possible about its inner workings."

Unfortunately, this design principle of information hiding has become fatally confused with the implementation technique of encapsulation.
If the design of a class involves a member private int count, then encapsulating that behind a getter public int getCount() hides nothing. When (not if) count gets renamed, changed to a big integer class, or whatever, all the client classes need to know about it. I hope you can see that if we didn't have any getters on our objects then this whole story unwinds and a nasty set of design problems evaporates before our eyes.
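To see the difference, compare these two sketches (the class names are invented; the original only mentions count and getCount()):

    import java.math.BigInteger;

    // Encapsulated but not hidden: the getter turns the representation
    // decision (an int named count) into a public commitment shared by
    // every client.
    class ExposedTally {
        private int count;
        public int getCount() { return count; }
    }

    // Hidden: clients tell it things or ask it questions. Here the
    // representation has already changed from int to BigInteger, and no
    // client can tell.
    class HiddenTally {
        private BigInteger count = BigInteger.ZERO;

        void increment() {
            count = count.add(BigInteger.ONE);
        }

        boolean hasReached(long threshold) {
            return count.compareTo(BigInteger.valueOf(threshold)) >= 0;
        }
    }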
What was the point of all that, again?
John's simple-sounding request, write code with no getters (and thoroughly test it, quite important that bit), is a miraculously clever one. He is a clever guy, but even so that's good.
Eliminating getters leads developers down a route that we have known to be beneficial for thirty years, without really paying it much attention. And the idea has become embedded in the first really new development technique to come along for a long time: TDD. What we need to do now is make sure that mocking doesn't get blurred in the same way as a lot of these other ideas have been.
First step: stop talking about mocking out infrastructure, start talking about mocks as a design tool.