Entries in Specification (47)

Monday
Nov 20 2006

Interval Arithmetic and Prolog

I taught myself Prolog back in 1984.  At the time I had a Sharp MZ-80B microcomputer which ran the CP/M operating system (like MS-DOS only slightly better).  I bought a Prolog interpreter and played around with it, using it to write a system for cataloging stock in the school science department where I worked as a lab technician.

Although I found the language intriguing, several things about it disappointed me.  One was the way it handled arithmetic.  Basically, when it came to arithmetic, Prolog just threw logic out of the window and did it in the same way that imperative languages did it.  I thought its designers had missed an opportunity here.  I had already come across interval arithmetic in volume 2 of Knuth's 'The Art of Computer Programming', the idea of which is that, instead of representing real values by single approximate numbers, you represent them by pairs of real numbers: a lower bound and an upper bound.  One of the problems with the single approximate number representation is that it almost always gives results that are approximations and hence are logically speaking incorrect.  The beauty of interval arithmetic is that it allows you to always make logically correct statements about real values.  This seemed to me to be a much more 'Prologish' way of doing arithmetic.
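As a concrete illustration of the idea (a minimal sketch in Python rather than Prolog; the interval representation and the exact-rational endpoints via `fractions.Fraction` are my choices, used here to sidestep rounding questions entirely):

```python
from fractions import Fraction

# An interval (lo, hi) with lo <= hi asserts of a real value x that
# lo <= x <= hi -- a statement that remains logically true after each
# operation, unlike a single rounded approximation.

def add(a, b):
    # The sum of values drawn from a and b lies between the sums of
    # the corresponding bounds.
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):
    # With the signs of the endpoints unknown, the extremes must be
    # among the four endpoint products.
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

third = (Fraction(33, 100), Fraction(34, 100))  # 1/3 lies in [0.33, 0.34]
print(add(third, third))  # (Fraction(33, 50), Fraction(17, 25)) -- i.e. 2/3 lies in [0.66, 0.68]
```

However wide the bounds become, the containment statement itself is never false, which is what makes the representation feel at home in a logic language.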

There is a minor complication in that, with some arithmetic operations, the interval returned is actually split into two or more intervals.  For example, if you divide 1 by the interval [-0.1, +0.1] you get a result which is the union of [-infinity, -10] and [+10, +infinity].  However, this can easily be handled by Prolog's back-tracking mechanism: one of the intervals is returned first, and the other is returned only if the first computation fails and back-tracks to this point again.
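That splitting behaviour can be sketched as a generator (Python again, purely for illustration; a Prolog predicate would deliver the second interval on backtracking rather than from an explicit generator):

```python
def reciprocal(lo, hi):
    """Yield the interval(s) containing 1/x for x in [lo, hi]."""
    inf = float('inf')
    if lo > 0 or hi < 0:
        # The interval excludes zero: a single result interval.
        yield (1 / hi, 1 / lo)
    else:
        # The interval straddles zero: the result splits in two.
        # Yielding them one at a time plays the role of Prolog's
        # backtracking -- the second is tried only if needed.
        if lo < 0:
            yield (-inf, 1 / lo)
        if hi > 0:
            yield (1 / hi, inf)

print(list(reciprocal(-0.1, 0.1)))  # [(-inf, -10.0), (10.0, inf)]
```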

Nearly a decade later I came across the Motorola 68040 microprocessor and was surprised to find that it included hardware support for interval arithmetic in the form of special rounding modes (round towards plus infinity and round towards minus infinity).  Such modes are necessary to enable the lower and upper bounds to be determined efficiently.  These rounding modes were specified in the IEEE 754 standard for binary floating point arithmetic.  It pleased me that some experts thought interval arithmetic important enough to include provisions for it in a standard.  I later learnt that there is quite a lot of research into the use of interval arithmetic for certain kinds of computations.
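Without access to those rounding modes, one can still obtain safe (if slightly loose) bounds by nudging each computed bound one unit in the last place outward.  A sketch using Python's `math.nextafter` (available from Python 3.9; the helper name `idiv` and the restriction to positive intervals are my own simplifications):

```python
import math

def idiv(a, b):
    # Interval division; both intervals assumed strictly positive.
    # Hardware would compute the lower bound rounding towards minus
    # infinity and the upper bound rounding towards plus infinity.
    # Lacking that, we widen each round-to-nearest quotient by one
    # ulp, which is safe though one ulp wider than necessary.
    lo = math.nextafter(a[0] / b[1], -math.inf)
    hi = math.nextafter(a[1] / b[0], math.inf)
    return (lo, hi)

lo, hi = idiv((1.0, 1.0), (3.0, 3.0))
print(lo <= 1 / 3 <= hi)  # True
```

The hardware rounding modes exist precisely so that this widening step is unnecessary: each bound can be computed correctly rounded in the right direction in a single operation.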

Wednesday
Nov 08 2006

Train Delayed

Yesterday evening my train from Ascot to Reading was delayed for over an hour by a signal failure. Fortunately I had my commuter's survival kit with me: thick pullover, woolly hat, pencil and paper. So I dressed up warm, sat down and set to work thinking about a problem that has been worrying me for a while: "What does the root class of an object-oriented system actually represent?". I already have what I think is a satisfactory way of understanding the non-root classes as specifications of sets of objects, but with root classes this breaks down: there seems to be no set of objects that root classes correspond to. I find theories which have exceptional cases unsatisfactory and, when using them, I often get distracted trying to generalise them to remove the exceptions. Anyhow, I didn't solve the root class problem that evening, but I did work through several possibilities far enough to eliminate them, and by the time my train came, though quite chilled, I was feeling pleased with myself. These days it is quite rare that I get the chance to think about a single problem for such a length of time.

In the first volume of his 'Narrow Roads of Gene Land', evolutionary biologist William Hamilton writes that he worked out some of his early theories while waiting for trains at Waterloo.  He was then working at Silwood Park near Ascot.  I wonder if, in the evenings, he waited for his return train at Ascot or at Sunningdale?  While I am waiting on platform 2, I like to imagine his ghost sitting on one of the benches over on platform 1 scribbling equations.

Wednesday
Oct 11 2006

Talking about the Problem Domain instead of the Language

Steve Yegge has this to say about the programming language Ruby:

I have a lot of trouble writing about Ruby, because I find there's nothing to say. It's why I almost never post to the O'Reilly Ruby blog.  Ruby seems so self-explanatory to me.  It makes it almost boring; you try to focus on Ruby and you wind up talking about some problem domain instead of the language. I think that's the goal of all programming languages, but so far Ruby's one of the few to succeed at it so well.

I am not experienced enough with Ruby to confirm that this is true for that language but I do recognise the same feeling of falling through the language into the problem domain when using Leslie Lamport's TLA+.  I get this feeling much more with TLA+ than with Alloy, but then I am a lot less experienced with the latter.

Monday
Oct 09 2006

The Origins of Formal Methods

One of the key events in the early development of formal methods was T. J. Dekker's invention of his mutual exclusion algorithm, not so much because of the practical value of the algorithm, which is rather limited, but because of the way Dekker developed it. Edsger Dijkstra tells the story in EWD 1303:

For economic reasons one wants to avoid in a multiprogramming system the so-called "busy form of waiting", in which the central processor is unproductively engaged in the waiting cycle of a temporarily stopped program, for it is better to grant the processor to a program that may actually proceed. This economic consideration was in 1961/62 a strong incentive for the introduction of the P/V-synchronization mechanism, which made it explicit to the scheduler when a program was no candidate for processor time. An in retrospect more profound incentive, however, came from an experience I had in 1959 at the Computation Department of the Mathematical Center in Amsterdam.

I had invented what, in that environment at least, was a new type of problem. Consider two cyclic processes, each consisting of an alternation of "a critical section" and "a noncritical section". Was it possible to synchronize these two processes such that at any moment at most 1 of these processes would be engaged in its critical section? The way in which the processes could communicate was by atomic reads and writes in a common store. After having convinced myself that --at least according to the State of the Art of those days-- the answer was not obvious, I broadcast the problem to the members of the Computation Department for their general amusement and distraction. It was given that each execution of a critical section would only take a finite amount of time and it was required that a process ready to execute would in due time get the opportunity to do so.

Many purported solutions were handed in, but on closer inspection they were all defective, and my friends realized that the problem was harder than they had suspected and that their simple "solutions" wouldn't do. Fortunately my friends didn't give up and they handed in "improved" versions of their earlier efforts. But as their designs became more and more complicated, it became harder and harder to construct a counterexample. I couldn't keep up with them and had to change the rules of this programming contest: besides a program I required a proof that the program had all the desired properties.

And then something profound and lovely happened. By analyzing by what structure of argument the proof obligations could be met, the numerical mathematician Th.J. Dekker designed within a few hours the above solution together with its correctness argument, and this settled the contest.

Proof-driven development back in 1959!  Much of Dijkstra's subsequent work can be seen as working out the consequences of the insight that programs should be developed with proof in mind.
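For readers who have not seen it, Dekker's solution can be sketched as follows (a Python illustration in my own notation, not Dekker's original; it relies on CPython's interpreter making the individual loads and stores effectively sequentially consistent, whereas on real hardware with weaker memory ordering one would need memory barriers):

```python
import threading

want = [False, False]  # want[i]: process i wants its critical section
turn = 0               # which process must back off when both want in
counter = 0            # shared state the critical section protects
N = 10_000

def process(i):
    global turn, counter
    other = 1 - i
    for _ in range(N):
        # Entry protocol.
        want[i] = True
        while want[other]:
            if turn == other:
                want[i] = False      # back off: the other has priority
                while turn == other:
                    pass             # the "busy form of waiting"
                want[i] = True
        # Critical section: this read-modify-write is not atomic in
        # Python, so without mutual exclusion updates would be lost.
        counter += 1
        # Exit protocol: hand priority to the other process.
        turn = other
        want[i] = False

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000
```

The `turn` variable is the crucial invention: it breaks the symmetry between the two processes, guaranteeing that when both want to enter, exactly one backs off, so the algorithm achieves both mutual exclusion and progress using nothing but atomic reads and writes.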

Saturday
Oct 07 2006

An Impossible Requirement

[Photo: doors.JPG]

I don't think the people who designed these notices for Southwest Trains realised that 'everything' includes 'doors', and that it is not possible to keep doors away from themselves!  (Maybe I have been spending too much time writing formal specifications in Alloy and TLA+.)