Entries in Computing (187)

Wednesday, Nov 8, 2006

Train Delayed

Yesterday evening my train from Ascot to Reading was delayed for over an hour by a signal failure. Fortunately I had my commuter's survival kit with me: thick pullover, woolly hat, pencil and paper. So I dressed up warm, sat down and set to work thinking about a problem that has been worrying me for a while: "What does the root class of an object-oriented system actually represent?". I already have what I think is a satisfactory way of understanding the non-root classes as specifications of sets of objects, but with root classes this breaks down: there seems to be no set of objects that root classes correspond to. I find theories which have exceptional cases unsatisfactory and, when using them, I often get distracted trying to generalise them to remove the exceptions. Anyhow, I didn't solve the root class problem that evening, but I did work through several possibilities far enough to eliminate them, and by the time my train came, though quite chilled, I was feeling pleased with myself. These days it is quite rare that I get the chance to think about a single problem for such a length of time.
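For concreteness, here is the puzzle sketched in Ruby (the class names are invented for illustration; in the Ruby of the time Object is the root class, though later versions add BasicObject above it). A non-root class reads naturally as a specification of the set of its instances, but it is hard to say what set the root class specifies:

```ruby
class Animal; end
class Dog < Animal; end

rex = Dog.new
rex.is_a?(Dog)     # => true -- Dog specifies the set of dogs
rex.is_a?(Animal)  # => true -- Animal specifies a larger set

# The root class is the odd one out: everything is an Object,
# including classes themselves, so whatever "set" Object specifies
# must contain Dog, Animal, and even Object itself.
rex.is_a?(Object)     # => true
Dog.is_a?(Object)     # => true -- a class is itself an object
Object.is_a?(Object)  # => true -- the root class is an instance of itself
```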

In the first volume of his 'Narrow Roads of Gene Land', evolutionary biologist William Hamilton writes that he worked out some of his early theories while waiting for trains at Waterloo.  He was then working at Silwood Park near Ascot.  I wonder if, in the evenings, he waited for his return train at Ascot or at Sunningdale?  While I am waiting on platform 2, I like to imagine his ghost sitting on one of the benches over on platform 1 scribbling equations.

Wednesday, Oct 11, 2006

Talking about the Problem Domain instead of the Language

Steve Yegge has this to say about the programming language Ruby:

I have a lot of trouble writing about Ruby, because I find there's nothing to say. It's why I almost never post to the O'Reilly Ruby blog.  Ruby seems so self-explanatory to me.  It makes it almost boring; you try to focus on Ruby and you wind up talking about some problem domain instead of the language. I think that's the goal of all programming languages, but so far Ruby's one of the few to succeed at it so well.

I am not experienced enough with Ruby to confirm that this is true for that language but I do recognise the same feeling of falling through the language into the problem domain when using Leslie Lamport's TLA+.  I get this feeling much more with TLA+ than with Alloy, but then I am a lot less experienced with the latter.
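To illustrate the feeling rather than just assert it (my own example with invented domain names, not Yegge's): an idiomatic Ruby fragment tends to read as a statement about the problem domain rather than about the language.

```ruby
require 'date'

Invoice = Struct.new(:customer, :due_date, :paid)

# Reads as a sentence about invoices, not about Ruby:
def overdue(invoices)
  invoices.select { |i| i.due_date < Date.today && !i.paid }
end
```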

Monday, Oct 9, 2006

The Origins of Formal Methods

One of the key events in the early development of formal methods was Th. J. Dekker's invention of his mutual exclusion algorithm, not so much because of the practical value of the algorithm, which is rather limited, but because of the way Dekker developed it. Edsger Dijkstra tells the story in EWD 1303:

For economic reasons one wants to avoid in a multiprogramming system the so-called "busy form of waiting", in which the central processor is unproductively engaged in the waiting cycle of a temporarily stopped program, for it is better to grant the processor to a program that may actually proceed. This economic consideration was in 1961/62 a strong incentive for the introduction of the P/V-synchronization mechanism, which made it explicit to the scheduler when a program was no candidate for processor time. An in retrospect more profound incentive, however, came from an experience I had in 1959 at the Computation Department of the Mathematical Center in Amsterdam.

I had invented what, in that environment at least, was a new type of problem. Consider two cyclic processes, each consisting of an alternation of "a critical section" and "a noncritical section". Was it possible to synchronize these two processes such that at any moment at most 1 of these processes would be engaged in its critical section? The way in which the processes could communicate was by atomic reads and writes in a common store. After having convinced myself that --at least according to the State of the Art of those days-- the answer was not obvious, I broadcast the problem to the members of the Computation Department for their general amusement and distraction. It was given that each execution of a critical section would only take a finite amount of time and it was required that a process ready to execute would in due time get the opportunity to do so.

Many purported solutions were handed in, but on closer inspection they were all defective, and my friends realized that the problem was harder than they had suspected and that their simple "solutions" wouldn't do. Fortunately my friends didn't give up and they handed in "improved" versions of their earlier efforts. But as their designs became more and more complicated, it became harder and harder to construct a counterexample. I couldn't keep up with them and had to change the rules of this programming contest: besides a program I required a proof that the program had all the desired properties.

And then something profound and lovely happened. By analyzing by what structure of argument the proof obligations could be met, the numerical mathematician Th.J. Dekker designed within a few hours the above solution together with its correctness argument, and this settled the contest.

Proof-driven development back in 1959!  Much of Dijkstra's subsequent work can be seen as working out the consequences of the insight that programs should be developed with proof in mind.
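Dijkstra's "the above solution" refers to a listing in the original EWD note, not reproduced here. For reference, here is Dekker's algorithm as it is usually presented today, sketched in Ruby. One caveat: the algorithm assumes sequentially consistent reads and writes to shared memory, which plain Ruby variables do not strictly guarantee, so take this as a sketch of the algorithm's shape rather than production synchronisation code.

```ruby
# Dekker's algorithm for two threads, me = 0 or 1.
$wants = [false, false]  # $wants[i]: thread i wants its critical section
$turn  = 0               # which thread has priority when both want in

def with_mutual_exclusion(me)
  other = 1 - me
  $wants[me] = true
  while $wants[other]            # contention: the other thread wants in too
    if $turn == other
      $wants[me] = false         # not our turn: back off...
      Thread.pass while $turn == other
      $wants[me] = true          # ...then declare interest again
    end
  end
  yield                          # the critical section
  $turn = other                  # hand priority to the other thread
  $wants[me] = false
end

count = 0
threads = [0, 1].map do |i|
  Thread.new { 10_000.times { with_mutual_exclusion(i) { count += 1 } } }
end
threads.each(&:join)
puts count  # => 20000 if mutual exclusion held
```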

Wednesday, Sep 13, 2006

Programming with Warts

From a post by Bruce Eckel at Artima Forums:

One of the big reasons for my renewed interest in Ruby (besides the features it has that seem to make it a bit easier to create domain-specific languages) is because of Matz's decision to remove the Perlisms in Ruby. Any language (Python is the only other one I know of) that's willing to REMOVE warts (Java just seems to keep adding warts on existing warts) immediately becomes much more interesting in my perception.

Refactoring and simplification can be applied to programming languages as well as to programs.
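A concrete example of the kind of Perlism Eckel means (my illustration, not his): Ruby inherited Perl's cryptic special globals, and idiomatic style has been moving away from them.

```ruby
# The Perl-inherited way: a successful match silently sets
# cryptic globals such as $~, $& and $1.
"2006-09-13" =~ /(\d+)-(\d+)-(\d+)/
puts $1   # => "2006"

# The same result with an explicit MatchData object and no hidden state:
m = "2006-09-13".match(/(\d+)-(\d+)-(\d+)/)
puts m[1] # => "2006"
```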

Wednesday, Aug 30, 2006

1 + 1 = 2

Over at The Universe of Discourse, Mark Dominus has a nice article on Whitehead and Russell's Principia Mathematica from the viewpoint of a modern computer programmer.  One quotation:

... The notation is somewhat obscure, because mathematical notation has evolved substantially since then. And many of the simple techniques that we now take for granted are absent. Like a poorly-written computer program, a lot of Principia Mathematica's bulk is repeated code, separate sections that say essentially the same things, because the authors haven't yet learned the techniques that would allow the sections to be combined into one.
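By way of contrast with the hundreds of pages Principia needs before it can derive 1 + 1 = 2, the proposition is immediate in a modern proof assistant. A minimal sketch in Lean:

```lean
-- For Lean's built-in natural numbers, 1 + 1 reduces to 2
-- by computation, so reflexivity proves the equation outright.
example : 1 + 1 = 2 := rfl
```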