Tuesday, July 27, 2004
I went to my first Bay Area Lispniks meeting on Sunday in Berkeley. We had a nice lunch at Priya, an Indian restaurant on San Pablo Avenue, just off University. I rode BART up from Fremont, where I live, and did the 15-minute walk from the station to the restaurant. On the BART ride, I had Boston blasting on my Sony MD player and was reading Interactive Programming Environments, an old book with a bunch of interesting papers on Lisp, including one by Greenblatt, Knight, Holloway, Moon, and Weinreb about the original MIT Lisp Machine that complements the MIT CADR research report very well. I figured the Boston was appropriate since Tom Scholz went to MIT.
The book is old, copyright 1984, but contains a whole bunch of interesting papers on various programming environments, particularly Lisp environments like Interlisp and the Lisp Machine. Where the MIT CADR report stays pretty low level, describing the micro-architecture of the machine, the Lisp Machine paper in the book describes more of the high-level stuff. Tayssir John Gabbour recommended the book to me a few weeks ago, pointing out that you can get it used on Amazon for almost nothing. I paid $3.95 + $3.49 shipping & handling. Not a bad deal for 600 pages of largely Lisp history. Note that this book was written long before the AI winter, so the outlook is very positive on Lisp and the other various environments described.
Anyway, after the BART ride and walk, I ended up at Priya, found the guys, and everybody started chatting. About 10 or 12 people attended. I was one of the new guys, of which there were a couple more. I had the good fortune to end up sitting between Peter Seibel and Carl Shapiro. We talked a bit about Peter's book, which seems to be coming along surely, if slowly. Carl is working on a Win32 port of CMUCL and had some interesting stories to tell about the seedy underbelly of CMUCL. Let's just say that there is a bunch of code in the runtime that makes some very interesting assumptions about the overall OS architecture on which it is running. Most of the time, those assumptions hold true on Unix-like machines, but Win32 does some things quite a bit differently. Needless to say, Carl seems to be making good progress. We ended up discussing Win32 register calling conventions for quite a while, too. I had to strain to recall my Win32 programming days.
At one point, Peter Seibel made the comment that he thought spam would make email all but unusable and that started a lively discussion that lasted for most of the meal. Each end of the long table started independent debates. Rob Warnock had some interesting stories about battling spammers and dealing with the clueless people at Network Solutions about DNS issues.
After a couple of hours of good fun, people started to drift off and I trudged back to the BART station, where it was more blaring Boston and reading about Lisp machines. Overall, a very fun time. This was my first face-to-face encounter with a bunch of people who are into Lisp, and I was really impressed by the caliber of people who showed up. Everybody was really smart and had some fun stories to go along with things. Opinions were flowing freely. I'm really looking forward to next time.
Wednesday, July 21, 2004
I posted a review of Paul Graham's Hackers and Painters: Big Ideas from the Computer Age on the Finding Lisp book review page. Enjoy!
Summary: This is a great book. Even if you have already read the essays on Graham's web site, you'll enjoy this book.
Saturday, July 17, 2004
Okay, so here's a weird idea. I was still thinking about building a Lisp machine using an FPGA when it struck me that the embedded world would be an interesting place to use some of this technology. Java has found a lot of life embedded in cell phones, for instance. I would think that a Lisp machine would be easier to program, far more debuggable, etc. There was a lot of work for a while on processors that would execute Java byte code. Why not go in a similar direction, but using Lisp? The key in an environment like that is to keep the power dissipation per unit of computing work to an absolute minimum. Being able to compute tag checks in parallel with computation and avoiding boxing/unboxing operations would be more efficient, power-wise, I would think, than trying to emulate all that on a standard processor.
I started this blog about a month before Blogger introduced a massive upgrade. As a result, I was missing out on a number of features in the new version of the service because my defaults were already set one way. One of those features was comments. I just got all that working this morning. It was a bit more difficult than simply turning on the feature in Blogger since I had to integrate the various Blogger markup tags for this into my older, partially customized template. In any case, if you have been longing for a Finding Lisp comment feature, your desires are now fulfilled.
That said, I still love getting email feedback from people. I have met an amazing number of really smart, fun people since I started writing this blog. Some are newbies to Lisp, like me. Others are old-timers who have worked with Lisp for decades, including at various Lisp machine companies like Symbolics. Please continue to drop me a note every so often. I like constructive criticism or even just a pat on the back.
Friday, July 16, 2004
Today, I found a reference to the original MIT AI Memo 528 which describes the CADR Lisp machine. It would be interesting to rebuild this today using an FPGA. Not that I have the time for such a project, but given current FPGA densities, it would seem to be relatively easy to use a PCI-based FPGA evaluation platform to (re)create a Lisp machine. That would be kinda fun.
Update: I have gotten a few comments from people pointing out that AI Memo 528 is a scanned document and the current PDF at MIT is missing one of the pages. If you're actually up for trying to recreate this machine or study it in great detail, you'll need to do a bit of reverse engineering, it seems, to recreate the missing information. A few more people pointed out that you're still going to be stuck there with a CADR and no software. Okay, but it was an interesting idea, no? ;-)
I read Brian Mastenbrook's recent blog entry about overspecifying fonts in CSS with amusement. I have been watching the form vs. content war surrounding HTML and web presentation for about a decade now. Remember when Netscape added the various presentation tags like CENTER and FONT to Navigator and everybody went crazy? The computer scientists hated it because anything other than serif text on a gray background seemed unnatural. The graphics arts crowd loved it because they could finally get a web page to look roughly the way they had designed it. It has been pandemonium ever since.
Anyway, then I read Markus Fix's Lispmeister entry from today and felt guilty. My Blogger.com-derived template had hardcoded a bunch of font entries. So, I caved to the mounting pressure and stripped it all out. The thing that pushed me over the top was that I noticed that most of the font information on the web is Windows specific and didn't do anything for anybody else anyway. Given that almost half the Finding Lisp readership is non-Windows-based, I decided the change was warranted. (Update: I just checked my web logs for this month and Windows is now the dominant platform for Finding Lisp readers--about 75% or so. Oh, well. Finding Lisp readership has been growing substantially each month and Windows users seem to represent the bulk of that. Interestingly, Mozilla usage has stayed flat at about 50% through that same timeframe.)
On a side note, Exploit Explorer looks like it is still being used by a few people. If the bad CSS bugs didn't get you before, maybe the recent hullabaloo over security issues will. I have been using Mozilla at work on Windows for about a year now and I have to say that it's great. Explorer used to lead while Navigator fell behind; now the same thing is happening in reverse, and Explorer is rapidly becoming an obsolete program. People say that Firefox is also good, but I have never tried it. At this point, anything non-Microsoft is good for me. It looks like people are generally getting this hint, too.
Code Generation Network has an interview with Greg Wilson that is sort of interesting. Generally speaking, I think Wilson has it wrong. He spends a lot of time talking about "extensibility" in both his paper and this interview but never really defines what that is or why it really matters. I think he basically means macros. He spends way too much time talking about XML, which is really irrelevant to the discussion.
A notable quote from the interview is
We've spent the last twenty-five years slowly moving ideas from languages like Smalltalk and Scheme into languages like C (now C++), Java, C#, and so on: objects, garbage collection, reflection... The biggest idea that _hasn't_ been brought over (so far) is extensibility, i.e. the ability to add new constructs to the language to tailor it for your problem domain. If you've worked with Scheme's macro system, you know just how much this can do for you.
As I discussed previously, my first reaction is, "If you want Scheme's macro system, then use Scheme." Rather than trying to bolt all the various Lisp features into an XML-based, blah, blah, blah, maybe we should spend a bit more time trying to simply get over the parenthesis and use Lisp.
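To make Wilson's extensibility point concrete, here's a small sketch of what "adding a new construct to the language" looks like in Common Lisp. WITH-RETRY is a made-up construct, not from any library: a few lines of DEFMACRO and it reads like a built-in control form.

```lisp
;; A hypothetical WITH-RETRY construct: evaluate BODY, retrying up to
;; ATTEMPTS times if it signals an error, then re-signal. The point is
;; not the retry logic but that the *syntax* is now part of the language.
(defmacro with-retry ((&key (attempts 3)) &body body)
  (let ((try (gensym "TRY")))
    `(loop for ,try from 1
           do (handler-case (return (progn ,@body))
                (error (e)
                  (when (>= ,try ,attempts)
                    (error e)))))))

;; Usage -- the caller writes what looks like a native language feature
;; (FLAKY-STREAM is just an illustrative name):
;; (with-retry (:attempts 5)
;;   (read-line flaky-stream))
```

No XML schema, no parser generator, no language-committee meeting: just a function over sexprs that runs at compile time.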
Anyway, whereas Simonyi seems to be onto something in terms of the larger issues, Wilson seems to be suggesting the same sorts of things, but with a weaker voice as he bogs down in junk like XML.
Thursday, July 15, 2004
So after posting
Interestingly, I don't hear the word "refactoring" a lot in Lisp circles. Why?
yesterday in my blog entry about intentional programming, I was browsing Planet Lisp and saw Glenn Ehrlich's blog entry about Tayssir John Gabbour posting some of Antonio Menezes Leitao's papers on the ALU site (how's that for a twisted chain of references, eh?). Of course, one of the papers was titled "A Formal Pattern Language for Refactoring of Lisp Programs." So, there you go. I guess people do talk about refactoring in Lisp circles, just not very often.
Tayssir also wrote me an email and said that Kiczales started a company, Intentional Software, with Simonyi, but has since left. The links are starting to become more clear now, I guess. It looks like at least some of the Intentional Programming ideas came from Lisp systems, with some extension to abstract away from Lisp specifically.
Wednesday, July 14, 2004
I found this interview with Charles Simonyi today. Simonyi, formerly at Microsoft, talks about Intentional Programming, something he has worked on for over a decade, since his days at Microsoft. Intentional Programming, says Simonyi, aims to move beyond conventional text-based programming languages using a system based on an abstract tree structure stored in a database. The idea is that you separate the "intention" of a program from its expression as a text-based representation. An "intention" is an abstract conception of what the programmer intends for the program to do and can be either low level or high level. As near as I can tell, the best way to think about an intention is something akin to an abstract function, stripped of any particular programming language representation or algorithm. The idea is that many different representations (what Simonyi calls "projections") can be created for the abstract intentions. You could express them as a conventional programming language, or something more compact like a domain-specific programming language. For instance, given a database of intentions interconnected to represent a program, an editor could represent the program in something akin to C or Java. Further, code generators can pick different algorithms for the same intention based on where in the tree structure the intention sits. For instance, an intention might pick an algorithm based on parameter types.
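A toy model may make the master-tree/projection split clearer. This is my own sketch, not Simonyi's actual data model: the tree is the single source of truth, and each "projection" is just a rendering function over it.

```lisp
;; One "intention" tree, two projections of it. Renaming or rewriting
;; the tree automatically updates every projection, since they are just
;; views. (Illustrative only; IP's real database is far richer.)
(defvar *intention* '(* (+ x 1) y))

(defun project-sexpr (tree)
  "The Lisp projection: print the tree as-is."
  (prin1-to-string tree))

(defun project-infix (tree)
  "A C-like infix projection of the same tree."
  (if (atom tree)
      (string-downcase (princ-to-string tree))
      (format nil "(~a ~a ~a)"
              (project-infix (second tree))
              (first tree)
              (project-infix (third tree)))))

;; (project-sexpr *intention*) => "(* (+ X 1) Y)"
;; (project-infix *intention*) => "((x + 1) * y)"
```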
This video describing Microsoft's intentional programming prototype called IP may help you understand the concepts a bit better.
It's interesting to note that intentional programming serves up essentially the same value proposition as described in my previous blog entry about Dr. Gregory V. Wilson's notion of an extensible programming language. My guess is that Wilson is grabbing ideas from intentional programming.
Now, these general ideas strike me as being very interesting, particularly in view of the capabilities of Lisp. I went through a bit of Wilson's stuff last time, so let's spend a bit of time examining Simonyi's Intentional Programming, using the Microsoft video as a bit of a guide for what he's up to at his new company.
- Firstly, Simonyi wants to move beyond text notation of a program to a tree structure representing the computations required for an algorithm. What's interesting is that Lisp has essentially done that. More than any other programming language, Lisp basically exposes the parse tree of the underlying algorithm. The main difference between Lisp and IP is that Lisp uses a text-based representation (sexprs) rather than a versioned database.
- A key point about intentional programming is that it allows the user to change the way a problem is represented. If a domain-specific language is more helpful for a program expression, the user is encouraged to use that, rather than a conventional notation. Since the tree-structured database is the master information repository and languages are simply notational conveniences for the sake of the programmer, there is no issue in changing representations when it is convenient. In the Lisp world, this is encouraged, too. As I have said quite a few times in this blog, Lisp nailed the macro system, and how! One of the things that most excites me about Lisp is the opportunity to create domain-specific languages when required. The difference is that with intentional programming, the database tree is the master and everything else is a projection. There can be multiple equivalent projections/representations of the same intentions. In the Lisp case, the source text is a master, with any DSL being transformed into the parse tree of sexprs using macros.
- At one point during the IP video, they talk about having bits of code that can walk the IP database tree and perform various transformations and optimizations, such as looking for redundant pieces of code that can be extracted into common functions. They call these transformation functions "enzymes" (horrible name, IMHO). Of course, Lisp has had code walkers for quite some time, and the same types of functionality could be written fairly easily for Lisp.
- The code generation scheme seems to be the biggest difference between the two systems. In the IP video, they make it sound as if code generation is very flexible and essentially pluggable. Programmers can create new types of intentions and add them to the system. Part of the definition of an intention is an expression for how to transform that intention into code. In the case of Lisp, this is typically a more conventional system, with a compiler performing basic transformations and emitting code for primitives. One could easily conceive of a metaobject protocol for compilers, however, that would mirror what IP has done. Indeed, it looks like Kiczales and crew at Xerox PARC did some work in that area some time ago.
- One thing that intentional programming seems to make very easy is refactoring. Most of what we think of as refactoring in a language like Java simply becomes transformations on the tree representation, easily handled as such, rather than the parsing task that it is today. If you think about most of the really good refactoring tools, like those in Eclipse or IDEA, they operate by fully parsing the source code and then operating on a parse tree representation stored in the background. Interestingly, I don't hear the word "refactoring" a lot in Lisp circles. Why?
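The "enzyme" idea above maps pretty directly onto Lisp's code-is-data property. Here's a minimal sketch of a transformation function that walks a code tree and rewrites a pattern; a real code walker would have to understand macros, special forms, and variable bindings, while this one just recurses over raw sexprs.

```lisp
;; A minimal "enzyme": fold (+ x 0) and (+ 0 x) down to x wherever the
;; pattern appears in a code tree. Purely illustrative -- it is blind to
;; quoting, macros, and shadowed bindings.
(defun fold-add-zero (form)
  (cond ((atom form) form)
        ((and (eql (first form) '+)
              (= (length form) 3)
              (member 0 (rest form)))
         (fold-add-zero (if (eql (second form) 0)
                            (third form)
                            (second form))))
        (t (mapcar #'fold-add-zero form))))

;; (fold-add-zero '(defun f (x) (* (+ x 0) 2)))
;; => (DEFUN F (X) (* X 2))
```

The same shape of function could look for duplicated subtrees to extract into common functions, which is essentially what the IP video demonstrates.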
So, it seems to me that intentional programming is a pretty interesting field of study. Certainly, we need to find a way to express problems at higher levels of abstraction in order to combat the explosive complexity that accompanies big problems. Moving beyond simple imperative programming style and using DSLs heavily seems like a good idea. The ideas associated with intentional programming seem like they're on the right track, if for no other reason than Lisp has already partially validated at least some of them. So, a few questions...
- Is intentional programming really that different than just programming in Lisp to begin with?
- Is the total abstraction of computation into text-less "intentions" with text-based projections the right way to go, or does simply moving to sexprs do the job?
- Is the database really buying you anything from a computation/algorithm/expression standpoint, or is it just source code control in another form?
- How close to an amazing intentional programming system could you get by creating a great IDE in Lisp? Of the various Lisp programming environments, the best ones I have heard about seemed to run on the Lisp Machines, but I don't know that they had the same characteristics as an intentional programming system like IP. All I know about Lisp machines has been gleaned by watching some of the screen-cam videos floating around the net; I have never worked on one directly and don't know enough about the programming environment capabilities.
In short, intentional programming has a lot of neat ideas. The question is whether they are sufficiently different from what has already been done in Lisp to warrant the "wizzy, cool" factor associated with "new technology," or whether they are just Lisp, recycled yet again into the mainstream with a better marketing campaign.
Monday, July 05, 2004
I had an interesting experience this weekend. As I have stated before, I have been consuming Lisp-related books at a heady clip lately. Well, my bookshelves are full, as is so often the case in my house. As with my programming transitions of the past (C++ to Java, for instance), I actually started combing through my Java books, looking for a victim to make space, which I found. This is telling. I think I'm committed. Lisp is good.
After reading Glenn Ehrlich's Cooking with Lisp blog the other day, I sat and watched a fascinating video with Alan Kay, one of the old timers back at Xerox PARC. Glenn's posting was about Kay's Croquet shared collaboration environment. This interested me a little, but the highlight of the posting was a video lecture that Kay presented at Stanford. The video starts with Kay describing and demonstrating Croquet. That part was all well and good, but I wasn't overly impressed with the interaction model. After reading a lot of William Gibson novels, I understand the goal, but it just doesn't do it for me. Moving avatars through virtual 3D worlds just doesn't seem all that neat for anything other than a game-world. Most of the time it just seems like a hassle.
Okay, but after that, Kay starts to take questions at the lecture. This was the best part, in my opinion. The questions are very interesting, as are Kay's responses. Simply, Kay has some pretty harsh criticism of computer science education these days. He says that nobody is doing computer science anymore and basically equates today's computer science curriculum with vocational work, simply training legions of Java programmers and not studying any hard problems or advancing the state of the art. He says that he actually looked at writing Croquet in Java originally, but found it sadly lacking on a number of fronts and so they went back to Smalltalk (Squeak). He has high praise for Lisp and McCarthy, saying that it was one of the most impactful ideas ever in computer science. At one point, he blasts Stanford's Bill Gates-funded computer science building, saying it's an oxymoron.
In short, even if you aren't interested in Croquet, this video is fascinating and will expose you to the mind of Alan Kay, a genuinely smart person. Whether or not you agree with him, well, that's another matter.