Visibility and Control
Something I said recently that made the listener's ears prick up, regarding the use of (distributed) standups to manage outsourced development:
I would expect a daily handover to do much to raise your visibility of what the outsourcers are doing--and visibility is the first step to control.
MDA => Agile!?
Came across this article regarding the MDA. It took me long enough to get my head around the idea that it's the Model Driven Architecture, rather than just any old architecture that's driven by models. Except that Haywood tells us in the piece that actually there are two MDAs, depending on which tool chain you buy. The difference is that one set of vendors builds tools to support an elaborative workflow from the Computation Independent Model through to a Platform Specific Model, and another a translational one. Eh? Translation vs elaboration was one of the big arguments back in the days when there was an active marketplace for OO methodologies. Seems as if the MDA is intended to be translational. Or possibly elaborationist, depending, it seems, on whom you ask.
Steve Cook has this[pdf] (amongst other things) to say about MDA. Now, Steve was partly responsible for the Syntropy method, which features three kinds of model: Essential, Specification and Implementation--reflecting the view that one single modelling vocabulary won't do for capturing knowledge about the world, specifying a software system to deal with it or figuring out how to build such a system. That's a little bit like the MDA's distinction between Computation Independent Models, Platform Independent Models and Platform Specific Models. Kinda. But it's not clear what these models are really supposed to be written in. UML? The MDA is an OMG offering, and UML is the OMG's modelling language.
Unfortunately, it's sometimes quite hard to know what a given UML model is supposed to mean (without, ironically enough, showing the corresponding code), and so it's hard to see how automated translation of UML models is going to go anywhere interesting. During the last run at this sort of thing, the CASE tool boom of around ten years ago, I had a job in a C++ shop where all code was round-tripped through a leading tool. It wasn't unusual to have to go through certain hoops to get this tool to generate code that compiled at all, never mind do what I wanted it to do. Maybe the tools are a lot better now, although "OTI" Dave Thomas's fairly recent assessment of the MDA space makes me think not.
Meanwhile, Haywood makes this rather provocative statement:
Adopting MDA also requires a move towards agile development practices. While like many I'm an advocate of agile processes, it's still foreign to many organizations. The need for agile development follows from the fact that MDA requires models to be treated as software, hence they are the "as-is" view. Those organizations that used UML only for blueprints or sketches (Fowler's analysis [4.]) will find that MDA does not permit the use of UML in that way.
I suspect that it would come as a surprise to many Agile practitioners that adopting the MDA would make an organisation more agile (and that this is because you can't use UMLAsSketch!) But let's examine the claim in a little more detail.
Certainly the idea of keeping your whole model and code stack in sync at all times seems like an Agile idea. I'd expect that to be more of a strain on those folks who in the past have taken to producing huge UML models that no-one ever looks at, and not so much the sketchers, so maybe that does drive you in the Agile direction.
Extremely Agile
I have to be careful here, because I'm not just an Agilist, but an Extreme Programmer, so even other Agilists sometimes think I have some very strange ideas about development. The principles of Agile development compiled by the authors of the Agile Manifesto don't seem to say anything that would stop you from using MDA, but they do suggest not only that you keep your UML CIM in sync with your code in the reverse direction (which is what I understand Haywood to be talking about--don't let the code run away from the model) but also that you regenerate the whole system, CIM->PIM->PSM->code->deploy, frequently. The principles suggest:
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
Well, yeah. Actually, my preference is for a much shorter timescale: weekly releases do not seem too frequent to me. And I want to be able to release on any given day. And I want all my changes (to whatever artifact) reflected in a deployable system ASAP. By which I mean, within minutes. Beck recommends a Ten Minute Build, which in MDA terms would seem to mean running the whole workflow (modulo dependency analysis) all the way through and executing a suite of tests in ten minutes. Should make the hardware vendors happy.
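Just to make that ten-minute target concrete, here's a minimal sketch of a build script that regenerates everything and times the result. To be clear, the cim2pim, pim2psm and psm2code commands are hypothetical stand-ins for whatever tool chain you actually have, and the make targets are assumptions too; the point is only the shape of the workflow.

#!/usr/bin/env python3
# A minimal sketch, not a real tool chain: the cim2pim, pim2psm and
# psm2code commands below are hypothetical placeholders. The idea is to
# regenerate everything from the models, run the tests, and fail the
# build if the whole thing blows the ten-minute budget.
import subprocess
import sys
import time

BUDGET_SECONDS = 10 * 60  # Beck's Ten Minute Build

STEPS = [
    ["cim2pim", "models/domain.cim", "-o", "build/domain.pim"],  # hypothetical
    ["pim2psm", "build/domain.pim", "-o", "build/domain.psm"],   # hypothetical
    ["psm2code", "build/domain.psm", "-o", "build/src"],         # hypothetical
    ["make", "-C", "build/src"],                                 # compile generated code
    ["make", "-C", "build/src", "test"],                         # run the test suite
]

def main() -> int:
    start = time.monotonic()
    for step in STEPS:
        print(">>", " ".join(step))
        try:
            if subprocess.run(step).returncode != 0:
                print("Build failed at:", step[0])
                return 1
        except FileNotFoundError:
            print("No such tool:", step[0], "(this is only a sketch)")
            return 1
    elapsed = time.monotonic() - start
    print(f"Full regeneration took {elapsed:.0f}s")
    if elapsed > BUDGET_SECONDS:
        print("Too slow: the whole workflow has to fit inside ten minutes")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())

Run on every change, something of this shape tells you straight away when the full model-to-deployable regeneration has drifted past the budget, which is most of the value of the practice.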
Peer Review
Just recently I received an email from the organizers of a new conference concerned with the technique of peer review. The event seems to be largely concerned with the review process applied to scholarly papers, but it got me thinking about "review by peers", something that I do a lot of.
As a practitioner of Extreme Programming, of course, I prefer to write production code as one member of a pair. And pair-programming is explicitly an activity needing an effort towards a peer relationship: the higher-powered programmer in the pair has to throttle back a little, and the lower-powered one stretch. Peer-review of papers usually involves a panel of reviewers for each submission, and if you rotate people through pairs often enough (which might turn out to be very often indeed if this experience report[pdf] is anything to go by) then it won't be long before any code change you make has been thoroughly reviewed by the time it gets checked in.
Then again, during the planning game I like to use Delphi techniques to explore possibilities, reach consensus, and obtain estimates. Delphi works by having a group of, as they are called in the technique, "experts", who are by definition peers, iteratively review a proposal or (answer to a) question. Interestingly, the anonymous nature of the input to a Delphi review would seem to fix some of the problems that occur when using less formal means to reach a consensus.
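As a toy illustration (and only that), here's one way a round of that anonymous input might be summarised. The convergence rule, spread within a quarter of the median, is an arbitrary choice of mine rather than anything prescribed by the Delphi technique, and the numbers are made up.

# Illustrative sketch only: summarising one Delphi-style estimation round
# during the planning game. Estimates are collected anonymously; only the
# aggregate is fed back before the panel estimates again.
from statistics import median

def delphi_round(estimates):
    """Summarise one round of anonymous estimates (in ideal days, say)."""
    mid = median(estimates)
    spread = max(estimates) - min(estimates)
    # Arbitrary convergence rule: spread within a quarter of the median.
    return {"median": mid, "spread": spread, "converged": spread <= 0.25 * mid}

# Round one: the experts estimate independently and anonymously.
print(delphi_round([3.0, 5.0, 8.0, 4.0]))   # wide spread, so discuss and go again
# Round two: estimates tend to converge once the outlying assumptions are aired.
print(delphi_round([4.0, 5.0, 5.0, 4.5]))

Only the aggregate goes back to the group, which is what stops the louder voices from dragging the estimate their way.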
In development shops that don't do pairing, but do care about quality all the same, you'll often come across code review techniques of one sort or another. Fagan Inspection seems to be the best way to get this done with small groups, although in my experience it is ferociously expensive and, while it produces excellent results, perhaps doesn't offer great value for money. YMMV. If you have a (potentially) large group available, then the Open Source route is also a good one. Can be very slow, but also very effective, and (like the National Health Service) free of cost at the point of delivery...
And then there's reviewing conference sessions and papers and journal papers and drafts of books. This can be pretty excruciating work. As a reviewer, I wish more authors would heed this advice when preparing their submissions. As an author, I wish more conferences worked along the lines of the PLoPs, which are all about very intensive, hands-on, in-the-room, right-before-your-eyes peer review.
Although it is (in)famously possible for a clever and resourceful author to get carefully crafted utter tosh published in a supposedly serious journal, there have also been cases of what seems to be genuine fraud (as it were) getting past the review process in very hard science fields indeed. One has to wonder if this isn't, as with the remarkable ease with which dubious patents may be obtained these days, mostly due to reviewers being snowed under.
On the other hand, the organisers of that conference on peer review I mentioned up at the top are themselves notorious for accepting nonsense papers without review. In a spirit of enquiry I replied politely to the email suggesting that I probably wouldn't be able to attend the conference, but that I would be interested in helping out with the proposed book on peer review--as a reviewer. We'll see what comes of that.
Something Fishy
Just recently I've come across several uses, in quick succession, of would-be aphorisms to do with fish. It seems to be considered common knowledge that fish don't have a word for water, and that you couldn't ask a fish what water tastes like, and so forth. Well, you could argue, as this poem does, that it's perhaps likely that fish (supposing they have language) wouldn't have exactly one word for water. But on the face of it, the claims are nonsense. We have a word for air. And I can ask you what air smells like.
Generally, the folks using these sayings are making some point about people not being able to articulate thoughts, or even have thoughts, about their setting, about the context within which their lives are led. Often with satirical intent. And there's something to the idea that folks don't notice their surroundings much. But then maybe that depends on where you were educated, to some extent.
You've probably seen this picture, generally known in the West as The Great Wave. Folks writing about fractals and the scale-free geometry of natural shapes (something also found in other, perhaps surprising, places) often like to refer to The Great Wave and other prints by Hokusai, or to things that resemble it. But the subject (in the Western sense) of that print is not the wave. The Great Wave is one of a series called 36 Views of Mt Fuji. The funny thing is that in The Great Wave, as in quite a few of the other 46 [sic] prints, Fuji-san is hardly visible. It's there, but way off in the distance, a blip on the horizon.
What significance has this? Maybe a lot, if you have anything to do with collaborating across Eastern and Western cultures, and if the conclusions in this book hold up.