Books of Note

Practical Common Lisp
The best intro to start your journey. Excellent coverage of CLOS.

ANSI Common Lisp
Another great starting point with a different focus.

Paradigms of Artificial Intelligence Programming
A superb set of Lisp examples. Not just for the AI crowd.

Tuesday, June 28, 2005

ILC 2005, Wednesday report (late) 

Okay, sorry for the wait on this. I got swamped with all those things in my life that have generally made blogging so hard the past couple of months (family and work, notably). Here's the scoop on my Wednesday at ILC 2005.

The day started with Will Clinger's presentation about Common Larceny, a full R5RS Scheme system that targets the Microsoft CLR. Will described some of the challenges faced in adapting Scheme to run on the CLR. Generally speaking, everything works pretty well. The Scheme side of the house can interact with C# and other CLR-based languages. There are quite a few performance compromises in the system, basically dictated by limitations in the CLR that force many data items to be boxed when a native implementation would use an optimized unboxed representation. Floating point performance was notably low. In spite of all the hoops that were jumped through, Common Larceny runs about as fast as MzScheme (slightly faster on some benchmarks and slightly slower on some others). There is further performance tuning to be done, however, and Microsoft itself is tuning the CLR, too.

After hearing about the challenges of getting Scheme operating in the CLR, there were three breakout sessions describing various Lisp implementations targeting the Java virtual machine. I skipped all of those. Lisp-in-Java simply isn't my interest right now, although there are several cool implementations that are developing quite well.

Instead, I attended what turned out to be a bit of a performance track. First, John Amuedo delivered Optimizing Numerical Performance with Symbolic Computation. John described his work using Lisp for various signal processing algorithms like FFTs and the techniques he has used to optimize performance. Again, this sort of computation has great industrial utility. In particular, John's company is located in Los Angeles and does work for the various media companies there. John demonstrated some of the signal processing done on old movie soundtracks to restore them to their original sound quality, beating back the ravages of time. John showed examples of some of the optimization techniques he had used in writing his Lisp routines, including macro transformations to generate more highly-optimized FFT code. In the end, though, John was pretty forceful in saying that Lisp compilers need to get better at producing good numerical code. If they do, many of the other advantages of Lisp can be useful to those interested in numerical processing. Several other attendees echoed similar sentiments, and a few people agreed to get together to try to urge the Lisp compiler developers to help them out.
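
I didn't copy down John's actual code, but to give a flavor of the kind of tuning involved, here's a minimal dot-product sketch of my own showing the type and optimization declarations a Lisp programmer typically reaches for when chasing numeric performance (the function and names are mine, not John's):

    (defun dot-product (xs ys)
      ;; Declarations tell a good native compiler it can use unboxed
      ;; single-floats and skip runtime checks in the inner loop.
      (declare (type (simple-array single-float (*)) xs ys)
               (optimize (speed 3) (safety 0) (debug 0)))
      (let ((sum 0.0f0))
        (declare (type single-float sum))
        (dotimes (i (length xs) sum)
          (incf sum (* (aref xs i) (aref ys i))))))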

Next up was Roger Dannenberg's Functional Programming for Signal Processing. Dannenberg has written an XLisp-based digital audio synthesis and processing system called Nyquist. Basically, Nyquist uses an s-expression-based DSL to describe various audio processing components. This DSL is processed to produce optimized C code, which can then be linked into XLisp and provides high-speed digital audio synthesis. I liked Roger's talk. Roger had a great presentation style and explained his system in quite a bit of detail without either getting too deep in the weeds or glossing over it too much.
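
To make the idea concrete, here's a toy sketch of an s-expression DSL being translated into C text. It is nothing like Nyquist's real DSL, just an illustration of the general compile-to-C approach:

    (defun sexp->c (form)
      ;; Walk a tiny arithmetic DSL of binary operators and emit a C
      ;; expression string; a real system would emit whole loops and functions.
      (if (atom form)
          (string-downcase (princ-to-string form))
          (destructuring-bind (op a b) form
            (format nil "(~a ~a ~a)" (sexp->c a) op (sexp->c b)))))

    ;; (sexp->c '(* gain (+ left right)))  =>  "(gain * (left + right))"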

Next was Lynn Quam's Performance Beyond Expectations. Recently retired, Lynn had been working at SRI for a long time, specifically with Lisp-based image-processing applications. The main system has gone through several evolutions over the past couple of decades, first running on Symbolics hardware, then Sun/SGI machines running Lucid CL, then finally on standard platforms running CMUCL and Allegro. The images Lynn deals with are sometimes massive (1 GB or more). They often cannot be kept in memory all at once. Because the image sizes are so large, the images are often not stored in a simple linear, left-to-right, top-to-bottom pixel order. Instead, they may be broken into smaller tiles, and accessed left-to-right, top-to-bottom within each tile. In certain cases, different tiles will be paged from disk into main memory as they are required by the algorithms. Lynn demonstrated several methods of polymorphic access to pixel data. The polymorphism is required to keep the algorithms blissfully unaware of the pixel data arrangement. Lynn started with a non-polymorphic aref version to get some idea of the limits of the system. He then demonstrated the performance drop caused by switching to CLOS and generic function dispatch. He then developed several other techniques that keep the polymorphic access he wanted while approaching the speed of standard aref. Again, Lynn did a great job, showing snippets of code, walking the audience through what was different about each one, and then showing the resulting effect on the benchmark timings.
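
Lynn's code wasn't handed out, but the basic trade-off he measured looks something like this sketch of mine: a generic function gives the algorithms one uniform pixel interface, at the cost of CLOS dispatch on every access:

    (defclass linear-image ()
      ((pixels :initarg :pixels :reader pixels)
       (width  :initarg :width  :reader width)))

    (defgeneric pixel-ref (image x y)
      (:documentation "Return the pixel at X, Y regardless of storage layout."))

    (defmethod pixel-ref ((image linear-image) x y)
      ;; Simple left-to-right, top-to-bottom layout.
      (aref (pixels image) (+ x (* y (width image)))))

    ;; A TILED-IMAGE class would specialize PIXEL-REF to map X, Y to the
    ;; right tile (paging it in from disk if needed) without touching the
    ;; algorithms that call it -- that's the polymorphism, and the
    ;; generic-function dispatch on every pixel is the cost Lynn measured.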

The only question I had after watching Lynn's talk was why macros weren't used more often to generate more highly optimized versions of some routines based on the various image types. I would have thought a technique like that would have sped things up considerably. Lynn has been working on this particular problem domain for decades, though, so I figured that must have been considered at some point through the years.
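
For the record, the kind of thing I was imagining looks roughly like this (purely my own sketch, nothing from the talk): a macro that stamps out a fully declared, type-specialized copy of an inner loop for each pixel representation, so the compiler can open-code the array access and arithmetic:

    (defmacro define-sum-pixels (name element-type accumulator-type)
      ;; Generate one specialized summing loop per pixel representation.
      `(defun ,name (pixels)
         (declare (type (simple-array ,element-type (*)) pixels)
                  (optimize (speed 3) (safety 0)))
         (let ((sum (coerce 0 ',accumulator-type)))
           (declare (type ,accumulator-type sum))
           (dotimes (i (length pixels) sum)
             (incf sum (aref pixels i))))))

    (define-sum-pixels sum-pixels/u8    (unsigned-byte 8) fixnum)
    (define-sum-pixels sum-pixels/float single-float      single-float)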

Lunch was a quick trip to the Stanford student union for some Pad Thai noodle soup. That's about all my body could handle at that point. The day was hot. I managed to have Peter Seibel sign my copy of Practical Common Lisp. Good fun.

After lunch, I attended Jans Aasman's AllegroCache: A High-Performance Object Database for Large Complex Problems. The folks at Franz have built a fairly nice object-oriented persistence layer. It seems easy to use and is pretty fast, particularly for random queries, compared with standard RDBMSs. Jans was very clear that they are not positioning this against Oracle or other RDBMSs for those problems where an RDBMS will work. AllegroCache is for those problems where you really want an OODBMS and the hassle of an object-relational mapping layer is either too complex or doesn't meet your performance goals. I won't say much more other than it's still in beta right now. They're shooting for a production release sometime this fall.

After this was Rusty Johnson's Rapid Data Prototyping: Crafting Directionless Data Into Useful Information. This talk was a bit frustrating. I loved the title and had high expectations for the content. It sounded really cool. It fell flat. The main problem was that Rusty works for Northrop Grumman, which does a lot of work for the government. I got the feeling that, for security reasons, Rusty really couldn't provide much detail about what he actually does. Therefore, the talk was very big on generalities and didn't give any examples. Rusty described having a cool system to "manipulate data" (which was never really defined) to answer "hard questions" (an example of which was never given) for various "decision makers" (again, no examples given). I talked to another attendee afterwards and he said, "So, they do 'cool stuff' with 'cool technology', but that's all I can say about it." Another tip for presenters: if you work on technology that you can't, for one reason or another, talk about, then it doesn't make sense to submit a conference paper where you then go and avoid talking about it in anything but generalities. Again, I think there is probably some neat stuff hidden here, but Rusty's talk did nothing to describe it.

Next up was Brian Mastenbrook's Syntax Analysis in the Climacs Text Editor. Climacs is an Emacs-like editor written in Common Lisp that uses McCLIM for display. Brian used a package called "Slidemacs" to deliver his presentation using Climacs and McCLIM. Brian discussed the syntax analysis features the team is building into Climacs. The main point here is that these analysis features are based on an incremental parser that creates a full parse tree of the program/document under discussion. This then allows for all sorts of analysis and display possibilities, far beyond what GNU Emacs does with its various programming modes and font-lock mode for syntax coloring. In short, it makes the syntax analysis fully aware of some of the underlying program semantics. Thus, as new macros are defined, for instance, they can be colored in a certain way. Brian specifically demonstrated a Prolog example where a new operator was defined and how Climacs would correctly highlight the program text dynamically as the operator was defined or commented out. Brian also demonstrated some of the limitations of GNU Emacs Lisp mode syntax highlighting and how it sometimes gets confused by more advanced Common Lisp constructs like block comments or complex symbol names with escaped characters. I was impressed both with the capabilities of Climacs and with McCLIM generally. Brian was running the whole presentation on a Mac laptop and things were snappy the whole time.

As with Tuesday, the last three sessions of the day were plenary sessions with the whole group of attendees present.

The first plenary was Patrick Dussud's Re-inventing Lisp for Ubiquity. Patrick was one of the key guys at TI on the Explorer project and now works for Microsoft as the lead architect for the CLR virtual machine technology for .Net. The main premise of Patrick's talk was that Lisp is a great language but that today it exists in its own little self-contained world. Basically, Lisp doesn't play very well with others. And no, in Patrick's mind, FFIs are not the same thing as playing nicely with others. There is still the issue of the differences in standard data types and dynamic typing. Patrick described his dream for a new version of Lisp that would be statically typed and use more "standard" type representations. Thus, no tagging. This Lisp would be a "peer" in standard systems: you could develop modules in this new Lisp and link them, dynamically or otherwise, with modules written in any other language. If you're into Microsoft's .Net, that would mean a new Lisp that would interact properly with other software components and development tools such as debuggers and object browsers for the CLR.

If you're saying to yourself, "But that's not Common Lisp," then you're right. It's not. Patrick's suggestion is to try to keep some of the things which Lisp does better than any other language, things such as code-is-data and closures, but eliminate those things that keep Lisp from integrating well with other languages.

Patrick generally got a negative reaction from the room. Most people were heard muttering things like, "But I like dynamic typing..." I may have even heard something like, "He's a witch! Burn him!"

My own feeling is that there's nothing wrong with Patrick's suggestion. Yes, it isn't Common Lisp. But it's still a Lisp and there is nothing wrong with developing a new version of Lisp to suit some different requirements. Others may not like the compromises involved, but my own feeling was that such a Lisp would be interesting and would certainly have its uses. Right now, Lisp generally doesn't play well with others, just like Patrick said. Whether that's a problem or not for you depends on your point of view.

In the end, Patrick's talk generated a lot of comments and questions from the audience. There were times when it seemed he might not make it out of the room alive, and other times when things were a lot less heated.

Following this, Henry Baker, of garbage collection fame, delivered The Legacy of Lisp -- "Observations/Rants". I'll say right up front that I really enjoyed Henry's talk. It was one of the best of the conference that I saw. The talk was a pretty thorough critique of where Lisp is at versus where it could be. You don't have a rosy-glasses Lisp-lover in Baker.

While Baker clearly loves Lisp, he was very forthright in criticizing the language and the community on a number of points. Baker started off asking some very difficult questions: "If Lisp is so great, how come its development environments haven't advanced in decades?" (my paraphrase). Baker said that his own development style is one that proceeds from a rough prototype through to a finished product. One of the nice things about Lisp is that it supports this style all the way through; there is no need for a separate prototyping language followed by a "real language" of some sort. However, Lisp editors still leave a lot to be desired. Baker argued for a return to structured editors similar to those that existed back with Interlisp at PARC. Baker argued that we should have editors capable of massive restructuring of programs in a coherent fashion. In particular, it's important to be able to rename things on a massive scale. This is important when you're moving from a prototype to a final program and you start correcting the semantic errors in the initial names you gave to various objects. Second, it's important when you're working on some piece of code that you didn't actually write and you want to start annotating things and morphing them to fit your view of the problem, rather than the way the original author named them. Baker claimed that no language to date had delivered what he was asking for. I actually have to differ with Baker on that point. It sounded like what he wanted is what the current software community calls refactoring, and some of this does exist in the Java world with the excellent IDEs Eclipse and IDEA. In fact, some of the alpha renaming and beta reduction transformations that exist in the lambda calculus, which Baker was asking to be embedded into editors, are present in these other IDEs under different names.
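
To be fair, the trivial version of renaming really is easy in Lisp, since programs are just trees of symbols; the hard part Baker wants is doing it with awareness of scope, packages, and macro expansion. A crude sketch of my own, just to show the starting point:

    (defun rename-symbol (old new form)
      "Return a copy of FORM with every occurrence of OLD replaced by NEW.
    Purely textual on the tree: it knows nothing about binding forms,
    packages, or macros, which is where the real refactoring work lives."
      (subst new old form))

    ;; (rename-symbol 'tmp 'scratch-buffer
    ;;                '(let ((tmp (make-buffer))) (fill-buffer tmp)))
    ;; => (LET ((SCRATCH-BUFFER (MAKE-BUFFER))) (FILL-BUFFER SCRATCH-BUFFER))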

Baker argued that efficiency has always been a Lisp problem and that the general Lisp community has never been interested in efficiency, and in some cases is almost hostile to any suggestion that things be made more efficient. He said that it's simply too easy to write slow programs in Lisp and that most of the programming books that teach Lisp do so in a way that naturally leads new Lisp programmers to write slow Lisp code. Further, there are too many bad algorithms still embedded in various Lisp library functions that need to be cleaned out and made efficient. Finally, a big problem with Lisp is that there are very few Lisp profiling tools. When you actually find that your Lisp program is slow, it's very difficult to determine why without a lot of trial and error.
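
He's right that the portable toolkit is thin: the standard gives you TIME and GET-INTERNAL-REAL-TIME, and anything finer-grained is implementation-specific. About the best you can do portably is roll a little wrapper like this (my own sketch; RUN-FFT and BUFFER below are made-up names):

    (defmacro with-timing ((label) &body body)
      ;; Report wall-clock seconds for BODY using only standard functions.
      (let ((start (gensym "START")))
        `(let ((,start (get-internal-real-time)))
           (multiple-value-prog1 (progn ,@body)
             (format *trace-output* "~&~a: ~,3f seconds~%" ,label
                     (/ (- (get-internal-real-time) ,start)
                        internal-time-units-per-second))))))

    ;; (with-timing ("fft pass") (run-fft buffer))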

Interestingly, Baker argued for type checking to be introduced into Lisp. Baker said that many Lisp people have a natural aversion to type checking, but that it's obvious that some amount of static type checking would be good; while it would not catch every error, it would catch some, and that's a good thing. It was interesting to compare this position with Patrick Dussud's virtually identical position earlier. Hmmm....
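
Common Lisp does already let you layer some checking on top of the dynamic core; the question both talks raised is how much further to go. A small sketch using only standard machinery (the function and names are mine):

    (declaim (ftype (function (single-float single-float) single-float) mix))

    (defun mix (a b)
      (declare (type single-float a b))
      (/ (+ a b) 2.0f0))

    ;; A compiler with a type inferencer (CMUCL, SBCL) can warn about a
    ;; call like (mix 1 "two") at compile time, and CHECK-TYPE catches the
    ;; rest at run time. None of this checking is required by the standard,
    ;; which is roughly the gap Baker and Dussud were pointing at.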

Finally, Baker suggested that Lisp has now turned into a static language and that static languages are doomed to death (like Latin, as he put it). He said that Common Lisp has actually been a stumbling block for further Lisp evolution, as it has halted almost all innovation within the Lisp community. That felt like a breath of fresh air to me. I have been thinking the same thing for a while. Common Lisp needs an update. There are vast swaths of standard library functionality that the Common Lisp community needs to design and deliver, and they just aren't there yet. And so you have many programmers choosing Python or Ruby because good libraries exist there for solving modern problems.

Baker then argued that Lisp needs better tools for bit-hacking. He said that bit-hacking is not just a low-level, operating system issue. It's something that you encounter all the time with various application protocols. Take compression, for example, in the form of Gzip, MPEG, or JPEG. These are ubiquitous formats where Lisp has no particular advantage, and probably a real disadvantage. As somebody who deals with networking algorithms and protocols all the time, I had to agree with him here. Lisp does need better standard bit-hacking tools. This could be done in a standard library with special compiler support to open-code various bit-bashing primitives into compiler output.
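
To be clear, the standard isn't empty here: LDB, DPB, and the logical operators exist, and the complaint is more about how far you can push them on real formats and buffers. A small sketch of field extraction from a 16-bit header word (the field layout is invented for illustration):

    (defun decode-header (word)
      ;; Pull three fields out of a 16-bit word.
      (values (ldb (byte 4 12) word)     ; version: top 4 bits
              (ldb (byte 4  8) word)     ; header length: next 4 bits
              (ldb (byte 8  0) word)))   ; payload type: low byte

    ;; (decode-header #b0100010100000001) => 4, 5, 1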

Baker argued for real-time extensions to Lisp that would allow fine-grained control of scheduling, and for a good, integrated, persistent database (he even referenced AllegroCache as perhaps providing this).

About this time, Baker was ~40 slides into a 56-slide presentation, which was supposed to fit in a ~40-minute timeslot. He had already gone over his time budget. He also had John McCarthy sitting in the front row, waiting to follow him. John and Henry were exchanging comments on various aspects of Lisp history (Henry telling John at one point that dynamic variables were really broken). In short, there was no way this presentation was going to get finished the way Henry had planned. While there was simply too much information for the timeslot, Henry did a great job and pulled no punches when it came to being critical of Lisp's failures. Henry obviously loves Lisp and has great respect for its accomplishments, but he's also very realistic. I found this particularly refreshing. Sometimes Lispers wax so poetic about the language that they can't see its flaws. Henry did not fit that mold.

Whew! Still with me? One more session to go...

Last up was John McCarthy. Well, how could I miss this? Markus was sitting right in front of John with his John McCarthy t-shirt on. At this point in the day, I was pretty exhausted. Being sick and not having gotten much sleep the night before, I found myself actually dozing a bit during John's talk.

John discussed some pretty abstract ideas about the syntax of computer languages, saying that instead of the single syntax we always think about, there is actually more than one. He described an input syntax, an output syntax, and a computational syntax.

John described his new language, Elephant, that he's been working on for a while. Elephant was designed to help John in some of his AI research and has yet to be implemented.

One of the most interesting things John said was that he has really been out of the Lisp community for quite some time, instead devoting his time to the artificial intelligence research that first caused him to invent Lisp.

Interestingly, John said that he, like Baker, thinks that Common Lisp has ended up holding Lisp back. By defining an unchangeable standard for Lisp, well, it hasn't changed. And John definitely saw that as a bad thing. Several members of the audience argued with him a bit on that point, but he held his ground. At one point, he said the quote paraphrased in Markus's blog:

If someone was to drop a bomb on this building, it would wipe out 50 percent of the Lisp community. That would probably be a good thing. It would allow Lisp to start over.

That wasn't exactly what John said, but it was pretty close. Again, the main thing here was somebody with impeccable Lisp credentials saying that Lisp was far from perfect and in fact needed to do better than Common Lisp. The fact that Patrick, Henry, and John all argued essentially the same thing was a wake-up call, I think.

With that, the session ended and I headed home. There was an ALU meeting that was supposed to take place afterwards, but I was so tired that I didn't end up staying.

Hopefully, I got most of that reported correctly. Again, I apologize for the delay in getting this written.


Comments:


Thanks for a great writeup, Dave!
 


This is great, thanks so much for writing it up.
 


If you want a version of Lisp that is both numerically efficient and practical, have a look at LUSH. Arguably, Python is also getting pretty close to a Lisp replacement, including native compilers.

As for the people at the ILC, I don't believe they have anything of consequence to say or contribute anymore. They had their chance 20 years ago and all they gave us was CommonLisp, commercial failures, and a bad reputation.
 


After spending 10 years developing with CL and Symbolics systems, etc. (over 12 years ago), I found myself joining the 'real world' (the world of IT interoperability and common platforms) that has largely left the LISP community behind, grousing among itself (like the Smalltalk community) that it's 'better'. I've been yearning for the 'return' of LISP; however, that will require coexistence of LISP with other languages and systems. Although it's not a true dynamic language environment, the CLR/CLI/IL platform has many of the characteristics that would lend itself to a flavor (pun intended) of LISP that isn't true Common Lisp, but a LISP language implementation that could actually grow in usage. In fact, I'd say that the CLR is the only place that LISP can reemerge (not the JVM.) As Dogbert says, "Change happens, get over it." The LISP community needs to sacrifice some of the purity of LISP, otherwise they're just going to sacrifice LISP. (From another discussion on this topic, here's a good quote from Dan Weinreb, one of the Symbolics gurus, now at BEA.)
 


Great write-up Dave! So what happens next (your idea of it)???

What I mean is that most everywhere I turn these days the idea keeps cropping up that, yes, Lisp is 'great' but it's missing something (take your pick), or needs serious improvement/extension in 'X', or Common Lisp has gotten the community frozen like a bunny caught in halogen headlights, etc.

Your, Baker's, McCarthy's, and other comments make it plain that Lisp is seriously overdue for a next-generation jump. Having an obviously very good feel for the community and the technology, what are your practical ideas on HOW that can happen? And what can "we" out here in readerland / Lisp programmer land do to catalyse, help, support, and just plain simply push (x?) to help make that happen??

I'm getting a mix of eager, tired, and fed up both hearing and realising the language is great AND, as Henry Baker so eloquently described it, all the things that need to be generally fixed/modified/added/extended, etc. Okay, Enough Now! How Do "We" support an effort to get it fixed, and what kind of effort ought it to be??

Can you please write about this in your blog?!
 


Dynamic typing is right up there with macros, closures, etc as one of those lisp features that make the language truly more powerful, expressive, and abstract than its statically typed competition. I, for one, am not keen to leave dynamic typing behind. If that were to happen, I would have to add lengthy instanceof blocks to key parts of my favorite code, rather than letting lisp work its magic. Boo hiss!

I am willing to bet that the number of people using lisp today is much higher than what you ILC attendees think. You can invent new implementations of lisp all you want, but please don't mess up the established ones that we, the horde of no-name users, love so much. If anything should change in Common Lisp, it should be things that would make it more lispy, not less. For example, I would like to be able to obtain the SYMBOL-VALUE of a lexical binding. Let the disgruntled move to your CLR implementation, but let the happy stay that way.

If better refactoring tools for lisp are desired, let's write them as elisp programs. I use XRefactory for java, which is the best refactoring tool I have ever used. Writing something comparable for lisp should be easier, since lisp's syntax is vastly less complex. I am interested to see if Syntax Analysis in Climacs will include refactoring functionality.

Implementing a .NET version of lisp is not "interoperability". Rather, it is "inner-operability". It is stuffing a language into a vendor's virtual machine. If you write a shared object in one language, compile it, and use that shared object in a program written in a completely different language, that's interoperability. CLR bytecode is a language; C#, VB.NET, etc are different intermediary formats for producing the bytecode. In the same way, machine code is a real language, and things like C, lisp, etc are human-readable constructs that a compiler turns into that language. Can you execute raw C, C#, or lisp? No, it must be interpreted or compiled into something that is executable. If interoperability between lisp's machine code and C's machine code is desired (so that shared objects could be used interchangeably, for example), one would have to make the respective compilers create compatible ELF files.

I'd like to hear more on Richard Stallman's original vision of GNU/Linux, where he imagined C and lisp being equal partners. If lisp machine code were compatible with C machine code, could we not (as an example) write parts of the linux kernel in lisp, while writing other parts in C?

I guess what I am getting at is: if we are looking for a common currency for interoperability, it should be machine code, not CLR bytecode, or JVM bytecode, or any other bytecode, because anything other than machine code is not really executable, therefore it cannot serve as a lowest common denominator.
 
