Entries in Computing (187)

Saturday
Jul082006

Are Tests really Specifications?

Martin Fowler, Jeff Langr and Bob Martin seem very keen to defend the idea that tests should be regarded as specifications.  To me, this seems very strange, almost bizarre.  

Don't get me wrong, I recognize the value of automated test suites, but it never occurred to me that anyone would regard them as specifying the software they test.  This idea seems to show a fundamental misunderstanding of the relationships between specifications, programs and tests, and it blurs important distinctions.

Fowler uses the term 'Specification by Example', but what he is really talking about is exploring the specification by generating examples.  This is a good and valid activity, but I think it is incorrect to call the generated examples the specification.

The exponentiation example that Langr presents is great as a suite of tests, but as a specification it is rather a poor one.  Firstly, it is Java-specific (it would have to be rewritten to apply it to a Python program) but the concept of exponentiation is independent of any programming language.  Secondly, although Langr criticises the Sun specification for being too long, his is about the same length and doesn't even cover floating point exponents as the Sun one does.  Thirdly, his specification fails to actually define what exponentiation is (this is in spite of him criticising the Sun one for the same failing).  He gives lots of examples (2^4 = 16, (-2)^3 = -8, and so on), but these do not amount to a definition.  If you want to define exponentiation you really have to use equations, for instance: a^0 = 1, a^n = a * a^(n-1) and so on.  Incidentally, this enables you to define floating point exponents in terms of integer exponents quite simply by using the equivalence between y = a^(m/n) and y^n = a^m (for example: y = 10^3.14159 is equivalent to y^100000 = 10^314159).
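To see the difference between examples and a definition, here is a minimal sketch (the function names are my own, not Langr's or Sun's) that turns the equations above directly into code.  The examples then follow from the definition, rather than standing in for it:

```python
# Exponentiation defined by equations, not by examples:
#   a^0 = 1
#   a^n = a * a^(n-1)
# The recursion mirrors the equations directly.

def power(a, n):
    """Integer exponentiation defined recursively from the equations."""
    if n == 0:
        return 1                       # a^0 = 1
    if n < 0:
        return 1 / power(a, -n)        # a^(-n) = 1 / a^n
    return a * power(a, n - 1)         # a^n = a * a^(n-1)

def rational_power(a, m, n):
    """a^(m/n), defined via the equivalence y = a^(m/n) <=> y^n = a^m,
    i.e. the n-th root of a^m (small m and n only, for illustration)."""
    return power(a, m) ** (1.0 / n)

# The examples now follow from the definition:
print(power(2, 4))             # 16
print(power(-2, 3))            # -8
print(rational_power(4, 3, 2)) # 8.0, since y^2 = 4^3 = 64
```

The point is that the two `assert`-style examples are consequences of the three equations; deleting the examples loses nothing from the specification, whereas deleting the equations loses everything.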

Fowler, Langr and Martin seem to be trying to use 'Tests are Specifications'  to support Test-Driven Development (TDD).  I think this is a mistake on two counts: firstly, tests are very poor specifications and, secondly, TDD does not need this support - it stands perfectly well without it. 

Calling tests 'specifications' is contrived, misleading, and devalues the concepts of specification and testing.  Why then should obviously very intelligent and experienced people want to do this?  I can only assume that it is a result of political forces within the XP/Agile movement.  Maybe their clients asked for written specs, so the Agilists looked around for the nearest thing that they actually do write, found tests, and decided to call them specifications.

Tuesday
Jul042006

McAfee Internet Security Suite 8.0

I have been using McAfee Internet Security Suite on Windows XP since 2004.  Although it is a little awkward to set up and a little intrusive in its operation, I have always managed to get it working in a reasonably acceptable way, that is, until I upgraded to version 8.0.  This version seems to be flakey and badly designed.  In particular, during installation I managed to get Windows XP into a state in which it would repeatedly reboot during startup.  Furthermore, it does not appear to be possible to set up the Privacy Service so that different Windows XP users have different Privacy Service levels without forcing some of them to log in to the Service each time they log into XP.  I would have expected the Privacy Service to use the Windows XP password authentication, and I find it really unacceptable that it does not.  Consequently, I have now uninstalled the Privacy Service.  I now have a usable installation consisting of just McAfee VirusScan and Personal Firewall Plus (I have never felt the need for SpamKiller). 

Something is wrong.  For software as important as this, I shouldn't have to fight to get it installed.

Friday
Jun302006

Poor Abstractions as a cause of Software Failures

From Software Abstractions - Logic, Language and Analysis by Daniel Jackson (MIT Press, 2006):

The case for formal methods is often based on the prospect of catching subtle bugs that elude testing.  But in practice the less glamorous analyses that are applied repeatedly during the development of an abstraction and which keep the formal model in line with the designer's intent, are far more important.  Software, unlike hardware, rarely fails because of a single tiny but debilitating flaw.  In almost all cases, software fails because of poor abstractions that lead to a proliferation of bugs, one of which happens to cause the failure.

Saturday
Jun242006

Design by Guesswork

Over at Aftermarket Pipes Tim Lesher has a nice explanation of some strange behaviour of  Windows Notepad.  However, I disagree with him when he says "we can't even blame Notepad: it's a limitation of Windows itself".  The documentation for the IsTextUnicode Windows API call is quite explicit that the call makes an informed guess and might give a wrong answer.  The mistake made by the designers of Notepad was to hide from the user the fact that a guess was being made.  Lesher approvingly quotes Tim Peters'  "In the face of ambiguity, refuse the temptation to guess", to which I would add "but if you have to then don't hide the fact from the user".
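The ambiguity that IsTextUnicode has to guess its way through is easy to demonstrate.  Here is a small sketch (this is not the actual IsTextUnicode heuristic, just an illustration of the underlying problem): the same byte sequence can be perfectly valid in more than one encoding, so no byte-level guess can ever be certain.

```python
# The classic example: a short, even-length ASCII string that also
# happens to decode without error as UTF-16 little-endian.

data = b"Bush hid the facts"          # 18 bytes of plain ASCII

as_ascii = data.decode("ascii")       # the intended reading
as_utf16 = data.decode("utf-16-le")   # also succeeds, but every byte
                                      # pair becomes one CJK character

print(as_ascii)        # Bush hid the facts
print(len(as_utf16))   # 9 -- nine characters, none of them ASCII
```

Both decodings are legal, so a routine asked "is this Unicode?" can only make an informed guess, which is exactly what the IsTextUnicode documentation says it does.  The sin is in hiding the guess from the user.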

This all reminds me of the following story reported by Edsger Dijkstra in EWD 466 (pdf):

...  Niklaus [Wirth] told a terrible story about CDC software.  With 10 six-bit characters (from an alphabet of 63) packed into one word, CDC used the 64th configuration to indicate "end of line"; when for compatibility reasons a 64th character had to be added, they invented the following convention for indicating the end of a line: two successive colons on positions 10k+8 and 10k+9 --a fixed position in the word!-- is interpreted as "end of line".  The argument was clearly that colons hardly ever occur, let alone two successive ones!  Tony [Hoare] was severely shocked: "How can one build reliable programs on top of a system with consciously built-in unreliability?"  I shared his horror; he suggested that at the next International Conference on Software Reliability a speaker should just mention the above dirty trick and then let the audience think about its consequences for the rest of his time slice! ... 
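To make the horror concrete, here is a toy reconstruction in Python (my own sketch, not actual CDC code) of the convention Dijkstra describes: ten characters per word, with two successive colons in the last two character positions of any word read as "end of line".

```python
# Toy model of the CDC "dirty trick": characters are packed 10 per
# word, and '::' at offsets 10k+8 and 10k+9 (the last two character
# positions of a word) is interpreted as "end of line".

WORD = 10  # characters per machine word

def find_end_of_line(chars):
    """Return the offset where the convention sees end-of-line, or None."""
    for k in range(len(chars) // WORD):
        if chars[WORD * k + 8] == ':' and chars[WORD * k + 9] == ':':
            return WORD * k + 8
    return None

# Innocent data that happens to land '::' on the fatal offsets is
# silently truncated -- Hoare's "consciously built-in unreliability":
line = "t = a[i]::b[j]      "   # the '::' falls at offsets 8 and 9
print(find_end_of_line(line))   # 8 -- everything after is lost

safe = "t =a[i]::b[j]       "   # same text shifted one place left
print(find_end_of_line(safe))   # None -- the '::' misses the slot
```

Whether a program survives depends not on what it contains but on where its characters happen to fall within a word, which is exactly the kind of built-in unreliability Hoare was objecting to.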

Thursday
May252006

Beware the Optimistic Engineer

At the end of EWD500, an account of how he discovered and then corrected a bug in the On-the-Fly Garbage Collection algorithm, Edsger Dijkstra draws the following conclusion:

... for the design of multiprocessor installations we cannot rely on the traditional approach of the optimistic engineer, who, when the design looks reasonable, puts it together to see if it works.

After having just spent several weeks tracking down and fixing a bug in some multiprocessor software that I wrote myself, I now recognize myself to be an 'optimistic engineer'.  I am teaching myself to use Alloy in order to keep my 'optimism' under control.