A Day In The Life Of A Mathematics Undergraduate

Via Mathematics Weblog, here is an unusual promotional film for the 4-year MMath degree course at the University of Warwick:
I taught myself Prolog back in 1984. At the time I had a Sharp MZ-80B microcomputer which ran the CP/M operating system (like MS-DOS, only slightly better). I bought a Prolog interpreter and played around with it, using it to write a system for cataloguing stock in the school science department where I worked as a lab technician.
Although I found the language intriguing, several things about it disappointed me. One was the way it handled arithmetic. Basically, when it came to arithmetic, Prolog just threw logic out of the window and did it in the same way that imperative languages did. I thought its designers had missed an opportunity here. I had already come across interval arithmetic in volume 2 of Knuth's 'The Art of Computer Programming'. The idea is that, instead of representing a real value by a single approximate number, you represent it by a pair of numbers: a lower bound and an upper bound. One of the problems with the single-approximate-number representation is that it almost always gives results that are approximations and hence, logically speaking, incorrect. The beauty of interval arithmetic is that it always allows you to make logically correct statements about real values. This seemed to me a much more 'Prologish' way of doing arithmetic.
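To make the idea concrete, here is a minimal sketch in Python (my own illustration, with made-up function names like interval_add and interval_mul; it has nothing to do with the Prolog system I wrote): a real value is represented by a pair of bounds known to bracket it, and each operation returns bounds that bracket the true result. Directed rounding of the bounds is ignored here; a serious implementation would round the lower bound down and the upper bound up.

```python
# Toy interval arithmetic: a real value x is represented by a pair (lo, hi)
# with lo <= x <= hi.  Rounding of the computed bounds is ignored here.

def interval_add(a, b):
    (alo, ahi), (blo, bhi) = a, b
    return (alo + blo, ahi + bhi)

def interval_mul(a, b):
    # The product of two intervals is bracketed by the smallest and largest
    # of the four endpoint products (this covers the mixed-sign cases too).
    (alo, ahi), (blo, bhi) = a, b
    products = (alo * blo, alo * bhi, ahi * blo, ahi * bhi)
    return (min(products), max(products))

pi = (3.141, 3.142)   # the true value of pi certainly lies in this interval
e = (2.718, 2.719)    # likewise for e

print(interval_add(pi, e))   # bounds containing pi + e (up to the ignored rounding)
print(interval_mul(pi, e))   # bounds containing pi * e (up to the ignored rounding)
```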
There is a minor complication: with some arithmetic operations, the result is actually a union of two or more intervals. For example, if you divide 1 by the interval [-0.1,+0.1] you get the union of [-infinity,-10] and [+10,+infinity]. However, this is easily handled by Prolog's backtracking mechanism: one of the intervals is returned first, and the other is returned only if a later computation fails and backtracking brings execution to this point again.
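Again as an illustrative Python sketch rather than Prolog: dividing by an interval that straddles zero yields two candidate intervals, and handing them out one at a time from a generator plays roughly the role that backtracking plays in Prolog, with the second interval consumed only if the first choice leads to failure. The name interval_reciprocal is my own.

```python
import math

def interval_reciprocal(x):
    # Yield interval(s) that together contain 1/t for every nonzero t in
    # x = (lo, hi), assuming lo <= hi and x is not the single point 0.
    # When x straddles zero the answer is a union of two intervals;
    # yielding them one at a time mimics Prolog producing the second
    # answer only on backtracking.
    lo, hi = x
    if lo < 0 < hi:
        yield (-math.inf, 1.0 / lo)   # reciprocals of the negative part
        yield (1.0 / hi, math.inf)    # reciprocals of the positive part
    elif lo == 0:
        yield (1.0 / hi, math.inf)
    elif hi == 0:
        yield (-math.inf, 1.0 / lo)
    else:
        yield tuple(sorted((1.0 / hi, 1.0 / lo)))  # zero is not inside x

for candidate in interval_reciprocal((-0.1, 0.1)):
    print(candidate)   # (-inf, -10.0), then (10.0, inf) on the "retry"
```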
Nearly a decade later I came across the Motorola 68040 microprocessor and was surprised to find that it included hardware support for interval arithmetic in the form of special rounding modes (round towards plus infinity and round towards minus infinity). Such modes are necessary to enable the lower and upper bounds to be determined efficiently. These rounding modes were specified in the IEEE 754 standard for binary floating point arithmetic. It pleased me that some experts thought interval arithmetic important enough to include provisions for it in a standard. I later learnt that there is quite a lot of research on the use of interval arithmetic for certain kinds of computations.
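Python does not expose the IEEE 754 rounding modes directly, but the effect of rounding the two bounds outwards can be imitated, crudely and only as a sketch, with math.nextafter (available from Python 3.9), which nudges each computed bound one representable float outwards. The enclosure is then guaranteed, at the cost of intervals slightly wider than the hardware rounding modes would give.

```python
import math

def outward(lo, hi):
    # Widen (lo, hi) by one representable float in each direction, so that
    # the rounding error in computing lo and hi cannot lose the true value.
    return (math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf))

def interval_add(a, b):
    (alo, ahi), (blo, bhi) = a, b
    # With hardware rounding modes one would compute alo + blo rounded
    # towards minus infinity and ahi + bhi rounded towards plus infinity;
    # here the naively rounded sums are simply widened outwards instead.
    return outward(alo + blo, ahi + bhi)

third = outward(1 / 3, 1 / 3)       # an interval guaranteed to contain 1/3
print(interval_add(third, third))   # a narrow interval guaranteed to contain 2/3
```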
Mark Dominus describes trying to convince some people that, while i (a square root of minus one) and -i are not equal to each other, they are mathematically indistinguishable:
... 1 has two square roots that are not interchangeable in this way. Suppose someone tells you that a and b are different square roots of 1, and you have to figure out which is which. You can do that, because of the two equations a² = a, b² = b, only one will be true. If it's the former, then a=1 and b=-1; if the latter, then it's the other way around. The point about the square roots of -1 is that there is no corresponding criterion for distinguishing the two roots. This is a theorem. ...
What struck me about Mark's account was that it failed to mention that this is a symmetry, and that symmetries such as this one are important in abstract algebra.
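To spell the symmetry out (a standard argument, sketched here in LaTeX rather than quoted from Mark's post): complex conjugation is a field automorphism of the complex numbers that fixes every real number and swaps the two square roots of -1:

```latex
\sigma(a + bi) = a - bi, \qquad
\sigma(z + w) = \sigma(z) + \sigma(w), \qquad
\sigma(zw) = \sigma(z)\,\sigma(w), \qquad
\sigma(r) = r \;\text{ for all } r \in \mathbb{R}, \qquad
\sigma(i) = -i.
```

Any statement built from the field operations and real constants is therefore preserved by this map, so it is true of i exactly when it is true of -i. No such swap exists for the square roots of 1, because every field automorphism fixes 1 and hence fixes -1 as well; that is why the criterion a² = a in the quotation above can tell 1 and -1 apart.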
Over at The Universe of Discourse, Mark Dominus has a nice article on Whitehead and Russell's Principia Mathematica from the viewpoint of a modern computer programmer. One quotation:
... The notation is somewhat obscure, because mathematical notation has evolved substantially since then. And many of the simple techniques that we now take for granted are absent. Like a poorly-written computer program, a lot of Principia Mathematica's bulk is repeated code, separate sections that say essentially the same things, because the authors haven't yet learned the techniques that would allow the sections to be combined into one.