Books of Note

Practical Common Lisp
The best intro to start your journey. Excellent coverage of CLOS.

ANSI Common Lisp
Another great starting point with a different focus.

Paradigms of Artificial Intelligence Programming
A superb set of Lisp examples. Not just for the AI crowd.

Thursday, October 21, 2004

More on Random Testing 

What timing. I just saw that somebody has written a program to generate randomly bad HTML pages, which can be used to test how browsers handle malformed input. As you might expect, most browsers handled it badly. Surprisingly, MS IE did the best of the bunch. The speculation is that it had been tested with just such a tool previously. My guess would be that when Microsoft got security religion, this was the sort of thing they started to do.
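For a flavor of what such a tool might look like, here is a toy sketch in Lisp (my own, not the program mentioned above): build a random tag tree, but sometimes emit a dangling attribute or "forget" a closing tag.

(defun random-bad-html (depth &optional (stream *standard-output*))
  "Emit a random nest of tags, with deliberate mistakes sprinkled in."
  (if (zerop depth)
      (write-string "some text " stream)
      (let ((tag (elt '("b" "i" "p" "div" "table" "font") (random 6))))
        ;; Sometimes emit a dangling attribute, e.g. <font size=>
        (format stream "<~a~a>" tag (if (< (random 1.0) 0.2) " size=" ""))
        (loop repeat (1+ (random 3))
              do (random-bad-html (1- depth) stream))
        ;; Sometimes "forget" the closing tag entirely.
        (when (< (random 1.0) 0.7)
          (format stream "</~a>" tag)))))

Call it as (with-output-to-string (s) (random-bad-html 4 s)) and feed the result to a browser. The point isn't realism; even this trivial generator produces nestings and omissions that no hand-written test page would.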


Wednesday, October 20, 2004

Random Testing 

I have run across a few references to Paul Dietz's Common Lisp test suite over the past few months, the latest being from Juho Snellman's blog where he describes how the suite caught some errors in his new SBCL register allocator code.

Paul describes his work on the suite in his own blog, the 13-September entry of which had this to say:

To go beyond simple 'does function FOO do X when called on arguments Y' kinds of tests, I've also started experimenting with a random tester. I've taken the approach of William McKeeman (favorite quote: "Excellent testing can make you unpopular with almost everyone.") I generate random lisp forms that are guaranteed to be well-defined when a set of free variables have integer values. The forms are wrapped in two lambda expressions, one with type declarations for the free variables and one in which optimizations are inhibited. The functions are compiled and their outputs compared on a set of random inputs.

This approach is very simpleminded (and a lot easier to do in Lisp than in C, as McKeeman did), but it immediately found new bugs in every lisp implementation on which I tried it. The beauty of this approach is that it exploits increasingly cheap CPU cycles to save the increasingly relatively expensive programmer/tester cycles.
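Just to make the shape of this concrete, here is a tiny sketch (mine, not Paul's actual code) of the kind of harness he is describing: generate a random form that is total over the integers, compile it twice, once optimized with type declarations and once with optimizations inhibited, and compare the two on random inputs.

(defun random-form (vars depth)
  "Generate a random integer-valued form over VARS. Only total
operations (no division, no shifts), so both compilations must agree."
  (if (or (zerop depth) (< (random 1.0) 0.3))
      (if (< (random 1.0) 0.5)
          (elt vars (random (length vars)))
          (- (random 201) 100))
      (let ((op (elt '(+ - * min max logand logior logxor) (random 8))))
        (list op
              (random-form vars (1- depth))
              (random-form vars (1- depth))))))

(defun test-one-form (&optional (nvars 3) (depth 5) (trials 100))
  "Compile a random form optimized and unoptimized; compare on random inputs."
  (let* ((vars (loop repeat nvars collect (gensym "V")))
         (form (random-form vars depth))
         (fast (compile nil `(lambda ,vars
                               (declare (optimize (speed 3) (safety 0))
                                        (type (integer -100 100) ,@vars))
                               ,form)))
         (slow (compile nil `(lambda ,vars
                               (declare (optimize (speed 0) (safety 3)))
                               ,form))))
    (loop repeat trials
          for args = (loop repeat nvars collect (- (random 201) 100))
          unless (eql (apply fast args) (apply slow args))
            do (return (list :mismatch form args)))))

Run test-one-form in a loop overnight and anything it returns is, by construction, a compiler bug in one of the two compilation paths.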

In my experience, random testing of just about anything is a really good thing. I have always found it effective at finding those elusive corner cases that are so difficult for people to envision themselves. The computer can generate such perverse test cases that it really makes you wonder why people don't do this more often.

For instance, when I was at HP in the late 1980s working on the PA-RISC series 700 workstation products, I knew one of the engineers in the CPU group who was designing the next round of IEEE floating point co-processors for PA-RISC. In case you aren't aware, the IEEE FP spec is very precise but has a number of very hairy corner cases. In particular, things like the rules for rounding in the last bit of precision are called out very clearly. This engineer, who I always thought was pretty smart, built himself a software simulation of his chip design and proceeded to generate random test vectors for the new design. Simultaneously, he ran those same vectors through his trusty Motorola 68K FP coprocessor and had the system stop and dump any vectors that produced a mismatch.

When he first fired up the simulation, it ran for only a few seconds before it found a problem. He fixed the problem and restarted. It ran for a minute. He fixed the problem and restarted. It ran for a few minutes. He fixed the problem and restarted. It ran for an hour or two. He fixed the problem and restarted. It ran for 24 hours. He fixed the problem and restarted. It then ran non-stop.

Note that if he had just done this, he actually couldn't have been sure that his processor was bug-free, only that it had exactly the same bugs as his 68K FPU. To ensure that he didn't miss a bug, he also compared with earlier PA-RISC FPUs.

What is most interesting about this example is that if Intel had done the same basic testing on the original Pentium, they would not have suffered the massive FDIV bug debacle of 1994.

I have used the same technique in testing network protocol implementations. Protocol decoding is notoriously error-prone, and bugs there can result in huge security holes. In this case, I try to capture real data streams and then write a program that corrupts the data slightly. Because protocols involve various packet integrity algorithms, you often have to generate something that will make it all the way through the different parts of the stack rather than getting thrown out at the lowest levels; otherwise you aren't testing much beyond the lowest layers of your system. For instance, if you capture IP packets and corrupt them, recompute the IP and TCP checksums so that the packets aren't bounced right at those checks but actually make it to the deeper parts of the stack.
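To make that concrete, here is a sketch (mine, with simplified hypothetical helpers) of the "corrupt, then fix the integrity check" step for raw IPv4 packets. The checksum algorithm is the real RFC 1071 one; everything else is pared down.

(defun internet-checksum (octets start end)
  "RFC 1071 one's-complement checksum over OCTETS in [START, END)."
  (let ((sum 0))
    (loop for i from start below end by 2
          do (incf sum (+ (ash (aref octets i) 8)
                          (if (< (1+ i) end) (aref octets (1+ i)) 0))))
    ;; Fold the carries back into the low 16 bits.
    (loop while (> sum #xffff)
          do (setf sum (+ (logand sum #xffff) (ash sum -16))))
    (logand (lognot sum) #xffff)))

(defun corrupt-and-fix (packet &optional (nflips 3))
  "Flip NFLIPS random payload bits, then recompute the IPv4 header
checksum so the packet survives the lowest-level integrity check.
PACKET is a vector of (unsigned-byte 8) holding a raw IPv4 datagram."
  (let ((ihl (* 4 (logand (aref packet 0) #x0f)))) ; header length in octets
    ;; Corrupt only bytes past the IP header, so the headers stay parseable.
    (when (> (length packet) ihl)
      (loop repeat nflips
            for i = (+ ihl (random (- (length packet) ihl)))
            do (setf (aref packet i)
                     (logxor (aref packet i) (ash 1 (random 8))))))
    ;; Zero the header checksum field (octets 10-11), recompute, store.
    (setf (aref packet 10) 0
          (aref packet 11) 0)
    (let ((ck (internet-checksum packet 0 ihl)))
      (setf (aref packet 10) (ash ck -8)
            (aref packet 11) (logand ck #xff)))
    ;; The TCP checksum (which also covers a pseudo-header) needs the
    ;; same treatment to get past the transport layer; omitted here.
    packet))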

Others have successfully used the technique to test UNIX (and here) and Windows utilities.

In short, I don't think I have ever seen a case where a random input tester didn't reveal a bug of some sort, unless the system had already been subjected to such a tester previously.

Update: Here is a good description of William McKeeman's work using random testing on the DEC C compiler. I think Paul's blog referenced this, but the link dropped out when I did the cut/paste, and Paul's copy of the link also appears to be stale.


Principia Lispia 

The other day, I stumbled onto Paul Graham's ILC 2003 talk describing the development of Arc (probably after reading Bill Clementson's blog, where all this sort of thing shows up first). Graham is trying to develop Arc from a small set of foundational, axiomatic functions on which the whole language will then rest. I won't go into everything here, as that isn't the subject of my posting, but read Graham's talk if you are interested in Arc development. The thing that caught my eye was that Graham mentioned that McCarthy had originally done the same thing with Lisp. Googling for "John McCarthy" got me to McCarthy's home page, which has links to the original 1960 paper in which John described LISP (note the caps).
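For a flavor of what "axiomatic" means here, consider how little it takes to write an evaluator once you have quote, atom, eq, car, cdr, cons, and cond. A quick sketch (mine, compressed from the spirit of the 1960 paper, not McCarthy's actual code):

(defun micro-eval (e env)
  "Evaluate expression E in ENV, an association list of bindings."
  (cond ((atom e) (cdr (assoc e env)))            ; variable lookup
        ((eq (car e) 'quote) (cadr e))
        ((eq (car e) 'atom)  (atom (micro-eval (cadr e) env)))
        ((eq (car e) 'eq)    (eq (micro-eval (cadr e) env)
                                 (micro-eval (caddr e) env)))
        ((eq (car e) 'car)   (car (micro-eval (cadr e) env)))
        ((eq (car e) 'cdr)   (cdr (micro-eval (cadr e) env)))
        ((eq (car e) 'cons)  (cons (micro-eval (cadr e) env)
                                   (micro-eval (caddr e) env)))
        ((eq (car e) 'cond)  (micro-evcon (cdr e) env))
        ;; ((lambda (params) body) args...)
        ((and (consp (car e)) (eq (caar e) 'lambda))
         (micro-eval (caddar e)
                     (append (mapcar #'cons (cadar e)
                                     (mapcar (lambda (a) (micro-eval a env))
                                             (cdr e)))
                             env)))))

(defun micro-evcon (clauses env)
  ;; Truth in a clause test is written (quote t), since T isn't bound in ENV.
  (cond ((null clauses) nil)
        ((micro-eval (caar clauses) env) (micro-eval (cadar clauses) env))
        (t (micro-evcon (cdr clauses) env))))

So (micro-eval '((lambda (x) (cons x '(b c))) '(a)) nil) yields ((A) B C). Seven primitives plus lambda, and you have a working language interpreter; that is the whole trick Graham is chasing with Arc.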

Anyway, I didn't have time to read the paper right then and there, but took it with me on the flight to London last week. When I finally started reading on the airplane, I was pretty blown away. Suffice it to say, if you are a Lispnik, you should read this paper. A number of things about it struck me:

This was a pretty amazing piece of work, particularly if you consider that in 1960 people were still doing most everything in machine/assembly language and perhaps in FORTRAN. McCarthy was thinking so far outside the box, it's just beyond comprehension. It's also staggering that the ideas were so well formed at that point that they have survived to the present day almost unchanged (caddr and friends, for instance).

Effectively, we have here a work that is as foundational for computer science as Newton's Philosophiæ Naturalis Principia Mathematica was for physics and calculus, or Whitehead and Russell's Principia Mathematica was for mathematics. I'm not sure if there is a translation of "Lisp" into Latin (I took ancient Greek at university), but if there is, surely we have here a Principia Lispia. And in 34 pages, no less!


Thursday, October 14, 2004

Last Train to London... 

I have been in London all this week for business and having a great time. I must say that I love London. It's a fun city with a lot of interesting things for an American to do. Frankly, I love it for the people watching and find the Brits great fun. My wife and I have quite a few Brit friends back home in California and it's interesting (to say the least) to find a whole country populated by them.

I was sitting in the pub tonight for dinner, and happened to be reading Neal Stephenson's Cryptonomicon. For those that don't know, Cryptonomicon involves great stretches that deal with WWII and Alan Turing's work on cryptography. Much of that storyline takes place in London. In any case, the situation tonight was quite funny: reading a passage along with my third pint, letting escape a muffled chortle at Lawrence Waterhouse's observations of Brits, the very sorts of behavior I had witnessed myself earlier in the day.

It's interesting to be watching British political TV programs on the BBC. The Brits are far more in tune with American politics than I would have suspected. Yes, it didn't surprise me that they know the names of the candidates and have opinions about them. It did surprise me that the BBC carried sections of the presidential debates, that they were covering the rough electoral vote count, and that they even understood the nuances of the US election system's electoral college. I can't, for instance, tell you the first thing about how a British MP is elected. I did score some points with my Brit colleagues for having watched Prime Minister's Questions on C-SPAN a few times. At least I wasn't totally ignorant.

In any case, it is no surprise to me that there is a good-sized Lisp community in London. Smart people, fun city.

Update: Did a bit of editing around this one. Needless to say, I wrote it in London after the three pints in the pub and not all of it was really cogent. ;-)

