A Naming Quandary in C#

Today I did some C# programming (my first, actually), and came across an interesting conundrum regarding the syntax of delegates.

Delegates are C#'s replacement for C++ pointers-to-(member)-functions. They are declared using a syntax a bit like a C-style function prototype. I tend to type curly bracket languages with a heavy C accent--I've been known to declare the argument of a Java main method as Object[] argv. So without thinking about it too much I typed my first ever delegate declaration something like this:
delegate void doStuff(object arg);
(all names have been changed to protect my employer's IP)

And then a new delegate object can be created, like this:
doStuff sd = new doStuff(stuffDoer.doYourThing);
where stuffDoer is an instance of a class with a method whose type signature (but not name) matches the type of the delegate. So there's a kind of duck typing for methods going on here.
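For concreteness, such a class might look something like this (the class body is my own sketch to match the snippets above; only the shape of the method matters):

class StuffDoer {
    // Any method with this shape -- void, taking a single object --
    // can be wrapped in a doStuff delegate, whatever its name is.
    public void doYourThing(object arg) {
        // ... actually do the stuff with arg ...
    }
}

StuffDoer stuffDoer = new StuffDoer();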

Then, sd can be passed into a method declared like this:
public void doStuffUsing(doStuff doStuffWith){
    doStuffWith(anObject);
}
in this way:
foo.doStuffUsing(sd);
Do you see my problem?

In the C/C++ world there is a desire to make the declaration and use of an identifier as similar as possible. That's why a pointer-to-member-function is declared (and initialized) like this:
void (StuffDoer::*fptr)(void*) = &StuffDoer::doYourThing;
and then used like this:
void SomeClass::doStuffUsing(void (StuffDoer::*fptr)(void*)){
    StuffDoer sd;
    (sd.*fptr)(this->pObject);
}
The C# syntax is clearer and more concise, by a long way, and the delegate has the nice (and safer) feature that it closes over the target object. In fact delegates have a property on them called Target that holds a reference to that object.
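Reusing the names from the snippets above (so this is just a sketch, not a recipe), the captured target can be inspected like this:

doStuff sd = new doStuff(stuffDoer.doYourThing);
System.Console.WriteLine(object.ReferenceEquals(sd.Target, stuffDoer)); // prints True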

But the C++ form is more consistent: there's a direct 1:1 relationship between almost all the parts of the declaration, mention and use. The places where that relationship doesn't hold are marked by the dereference syntax.

Now, the syntax of the declaration of the delegate, delegate void doStuff(object arg), does look a lot like a use, such as doStuffWith(anObject), which is cool. But what looks like a method name in the declaration is really the name of the type of delegate object. For that reason, mentions of the delegate type, such as
doStuff sd = new doStuff(stuffDoer.doYourThing)
look a bit fishy to me.

So, I thought, maybe it should be delegate void DoStuff(object arg) so that creating the delegate looks like a constructor call:
DoStuff sd = new DoStuff(stuffDoer.doYourThing);
But these don't look right next to each other either, because the parameter to the constructor call is in no way related to what looks like the corresponding parameter in the delegate declaration. Ouch.

Looking at the MSDN example (often highly dubious as that source can be), it seems that there is in any case a convention in the C# world of naming properties and methods with a leading capital, which also looks very odd to me. Not to mention the type object being spelled with an "o", not an "O".

The Power of Protocols

There's this friend of mine who has an idea he likes to drop into conversations now and again about the significance of teh interweb having been invented by a physicist (or somebody like one), not a computer scientist (or somebody like one). A computer scientist, you see, would never have tolerated all those dangling pointers. But the possibility of dangling pointers is one of the things that makes the Web so easy to use, and from its ease of use comes its utility. No need to wonder if the implications of this have any significance for the continued utter irrelevance of Project Xanadu to the world. Ted Nelson has stated:
The Xanadu® project did not "fail to invent HTML". HTML is precisely what we were trying to PREVENT-- ever-breaking links, links going outward only, quotes you can't follow to their origins, no version management, no rights management.
(BTW, the complete absence of any semantically useful markup whatever on the source page for this quote is a miracle of irony)

Meanwhile a retrospective air has come over certain folks in the web world. Unusually for such a neophile bunch, a certain amount of harking back (with suitably cynical commentary on the present day mixed in) has gone on. And so we are presented with what's being advertised as the first ever web page. One reddit contributor makes the observation that
it's also quite reassuring to see that the page still displays exactly as it was intended in 92. a few years later i was making sites with tables, and probably naively using ie-specific markup, which now look completely heinous. Another example of why adhering to standards increases the longevity of your data.
Indeed.

It seems as if info.cern.ch no longer resolves, but if it did then some of the original hyperlinks in that page would still work. That's amazing. Microsoft have also indulged in some digging around in the archives, and present their earliest homepage. It's just an image, sadly; the Wayback Machine, however, has a very early corporate-stylee Microsoft home page with some links on it that (after the wayback munging is removed) do still work. They don't necessarily go where they say they go, although some of them do, and that's amazing squared.

Why am I amazed? Because I can't even begin to imagine going to the server end of most other distributed information systems (certainly none of the ones that I've built) and trying to use them in exactly the way that was intended ten or more years previously and expecting to get anything sensible back. Think about trying to connect to a remote object over an ORB with ten years between client and server. How does the Web manage to do the right thing?

In several ways: firstly, (as Berners-Lee points out, but I've lost the link) there is no reason that a URI should ever go stale, being neither the address of a location in space nor a pointer to an object. Secondly, and more importantly, the web works on an open, text-based protocol. Email servers ten or more years old will provide a certain level of functionality by the same means. Thirdly, browsers are very lenient in how they interpret the HTML they receive, showing the user their best effort at rendering a page. That's a lot of sloppiness, a lot of flexibility, a lot of power.
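To make that second point concrete, here's a rough C# sketch of talking to a web server "by hand" over the plain-text protocol (the host name is just a placeholder):

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

class RawHttpGet {
    static void Main() {
        // The whole exchange is readable text: open a socket, type an HTTP
        // request at the server, and print whatever text comes back.
        using (TcpClient client = new TcpClient("example.com", 80)) {
            NetworkStream stream = client.GetStream();
            byte[] request = Encoding.ASCII.GetBytes(
                "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n");
            stream.Write(request, 0, request.Length);
            using (StreamReader reader = new StreamReader(stream)) {
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }
}

Any client that can write those few lines of text can talk to any web server, however old; that kind of longevity is much harder to get from a binary, API-shaped interface like an ORB.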

All the rambling above is an excuse to link to this presentation (pdf) by Dick Gabriel, which covers this ground in more depth and with more eloquence, most especially the remarkable benefits that come from building a system out of components communicating via protocols rather than APIs.

Gabriel is a very interesting guy, and his other essays are well worth studying.

Hello

In the past I have said some rather harsh things about the blogosphere. And now here I am blogging. So what's changed? I stand by my earlier comments as an accurate description of my thoughts at the time. Since then, more and more people that I know and respect have begun blogging, and I have found more and more high-quality content in blogs generally. And I have come to the conclusion that blogging has grown into something interesting and useful.

Also, I find that I'd like a place to punt ideas to a (potentially) wide audience, one that is more under (my) control than, say, an egroup, but doesn't require the discipline of my static web pages. This is it.