Books of Note

Practical Common Lisp: The best intro to start your journey. Excellent coverage of CLOS.

ANSI Common Lisp: Another great starting point with a different focus.

Paradigms of Artificial Intelligence Programming: A superb set of Lisp examples. Not just for the AI crowd.

Tuesday, June 28, 2005

Moleskines and permanent legacies 

In his blog a few weeks ago, Markus Fix wrote about leaving a legacy. I think he's onto something. In fact, I share his fondness for fine paper and fountain pens. I do far too much with a computer these days, and I know that I'm not leaving as much of a legacy as I could be. There's something odd about that: having all this technology, yet all of it so impermanent.

My own preference is for Moleskines, which I discovered at 43folders, and a Waterman fountain pen from Levenger, which I have owned for almost a decade now. I noticed that Peter Seibel seemed to be carrying a small Moleskine pocket notebook at ILC 2005.


ILC 2005, Wednesday report (late) 

Okay, sorry for the wait on this. I got swamped with all those things in my life that have generally made blogging so hard the past couple of months (family and work, notably). Here's the scoop on my Wednesday at ILC 2005.

The day started with Will Clinger's presentation about Common Larceny, a full R5RS Scheme system that targets the Microsoft CLR. Will described some of the challenges faced in adapting Scheme to run on the CLR. Generally speaking, everything works pretty well. The Scheme side of the house can interact with C# and other CLR-based languages. There are quite a few performance compromises in the system, basically dictated by limitations in the CLR that force many data items to be boxed where a native implementation would use an optimized unboxed representation. Floating-point performance was notably poor. In spite of all the hoops that were jumped through, Common Larceny runs about as fast as MzScheme (slightly faster on some benchmarks and slightly slower on others). There is further performance tuning to be done, however, and Microsoft itself is tuning the CLR, too.

After hearing about the challenges of getting Scheme operating in the CLR, there were three breakout sessions describing various Lisp implementations targeting the Java virtual machine. I skipped all of those. Lisp-in-Java simply isn't my interest right now, although there are several cool implementations that are developing quite well.

Instead, I attended what turned out to be a bit of a performance track. First, John Amuedo delivered Optimizing Numerical Performance with Symbolic Computation. John described his work using Lisp for various signal-processing algorithms like FFTs and the techniques he has used to optimize performance. This sort of computation has great industrial utility: John's company is located in Los Angeles and does work for the various media companies there. John demonstrated some of the signal processing done on old movie soundtracks to restore them to their original sound quality, beating back the ravages of time. He showed examples of the optimization techniques used in his Lisp routines, including macro transformations to generate more highly optimized FFT code. In the end, though, John was pretty forceful in saying that Lisp compilers need to get better at producing good numerical code. If they do, many of the other advantages of Lisp can be useful to those interested in numerical processing. Several other attendees in the room echoed similar sentiments, and a few people agreed to get together to urge the Lisp compiler developers to help them out.
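
To make that technique concrete, here is a minimal sketch of my own (not John's code, and greatly simplified): a macro that unrolls a fixed-length single-float dot product at compile time, the same general trick that can be used to generate specialized butterflies for a known FFT size.

    (defmacro dot-product-n (n a-form b-form)
      "Expand into an unrolled, fully declared dot product of length N.
    N must be a literal integer known at macroexpansion time."
      `(let ((a ,a-form) (b ,b-form))
         (declare (type (simple-array single-float (,n)) a b)
                  (optimize (speed 3) (safety 0)))
         (+ ,@(loop for i below n
                    collect `(* (aref a ,i) (aref b ,i))))))

    ;; (dot-product-n 4 va vb) expands into four multiplies summed
    ;; directly, with no loop overhead and no generic arithmetic.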

Next up was Roger Dannenberg's Functional Programming for Signal Processing. Dannenberg has written an XLisp-based digital audio synthesis and processing system called Nyquist. Basically, Nyquist uses an s-expression-based DSL to describe various audio processing components. This DSL is processed to produce optimized C code, which is then linked into XLisp to provide high-speed digital audio synthesis. I liked Roger's talk; he had a great presentation style and explained his system in quite a bit of detail without either getting too deep in the weeds or glossing things over.

Next was Lynn Quam's Performance Beyond Expectations. Recently retired, Lynn had been working at SRI for a long time, specifically with Lisp-based image-processing applications. The main system has gone through several evolutions over the past couple of decades, first running on Symbolics hardware, then Sun/SGI machines running Lucid CL, and finally on standard platforms running CMUCL and Allegro. The images Lynn deals with are sometimes massive (1 GB or more) and often cannot be kept in memory all at once. Because the image sizes are so large, the images are often not stored in simple linear, left-to-right, top-to-bottom pixel order. Instead, they may be broken into smaller tiles and accessed left-to-right, top-to-bottom within those tiles. In certain cases, different tiles are paged from disk into main memory as the algorithms require them. Lynn demonstrated several methods of polymorphic access to pixel data; the polymorphism is required to keep the algorithms blissfully unaware of the pixel data arrangements. Lynn started with a non-polymorphic aref accessor to get some idea of the limits of the system. He then demonstrated the performance drop caused by using CLOS and generic function dispatch, and went on to develop several other methods that keep the polymorphic access he wanted while approaching the speed of standard aref. Again, Lynn did a great job, showing snippets of code, walking the audience through what was different about each one, and then showing the resulting effect on the benchmark timings.

The only question I had after watching Lynn's talk was why macros weren't used more often to generate more optimized versions of some routines based on the various image types. I would have thought a technique like that would have sped things up considerably. Lynn has been working in this particular problem domain for decades, though, so I figured that must have been considered at some point over the years.
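
For the curious, here is roughly the shape of the polymorphic access under discussion. This is my own minimal sketch, assuming a simple tiled representation; it is not Lynn's actual code.

    (defclass tiled-image ()
      ((tiles     :initarg :tiles     :reader image-tiles) ; 2D array of tile arrays
       (tile-size :initarg :tile-size :reader tile-size)))

    ;; Generic dispatch keeps the algorithms blissfully unaware of the
    ;; storage layout...
    (defgeneric pixel-ref (image x y))

    (defmethod pixel-ref ((image tiled-image) x y)
      (let* ((n    (tile-size image))
             (tile (aref (image-tiles image) (floor y n) (floor x n))))
        (aref tile (mod y n) (mod x n))))

    ;; ...but every access pays for a generic-function dispatch plus the
    ;; tile arithmetic, which is the overhead Lynn was benchmarking
    ;; against a raw AREF on a flat array.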

Lunch was a quick trip to the Stanford student union for some Pad Thai noodle soup. That's about all my body could handle at that point. The day was hot. I managed to have Peter Seibel sign my copy of Practical Common Lisp. Good fun.

After lunch, I attended Jans Aasman's AllegroCache: A High-Performance Object Database for Large Complex Problems. The guys at Franz have built a fairly nice object-oriented persistence layer. It seems to be fairly easy to use and is pretty fast, particularly for random queries, versus standard RDBMSs. Jans was very clear that they are not positioning this against Oracle or other RDBMSs for those problems where an RDBMS will work. AllegroCache is for those problems where you really want an OODBMS and the hassle of an object-relational mapping layer is either too complex or doesn't meet your performance goals. I won't say much more other than that it's still in beta right now; they're shooting for a production release sometime this fall.

After this was Rusty Johnson's Rapid Data Prototyping: Crafting Directionless Data Into Useful Information. This talk was a bit frustrating. I loved the title and had high expectations for the content. It sounded really cool. It fell flat. The main problem was that Rusty works for Northrop Grumman, which does a lot of work for the government. I got the feeling that, for security reasons, Rusty really couldn't provide much detail about what he actually does. Therefore, the talk was very big on generalities and didn't give any examples. Rusty described having a cool system to "manipulate data" (which was never really defined) to answer "hard questions" (an example of which was never given) for various "decision makers" (again, no examples given). I talked to another attendee afterwards and he said, "So, they do 'cool stuff' with 'cool technology', but that's all I can say about it." Another tip for presenters: if you work on technology that you can't, for one reason or another, talk about, then it doesn't make sense to submit a conference paper where you then avoid talking about it in anything but generalities. Again, I think there is probably some neat stuff hidden here, but Rusty's talk did nothing to describe it.

Next up was Brian Mastenbrook's Syntax Analysis in the Climacs Text Editor. Climacs is an Emacs-like editor written in Common Lisp that uses McCLIM for display. Brian used a package called "Slidemacs" to deliver his presentation using Climacs and McCLIM. Brian discussed the syntax analysis features the team is building into Climacs. The main point here is that these analysis features are based on an incremental parser that creates a full parse tree of the program or document being edited. This allows for all sorts of analysis and display possibilities, far beyond what GNU Emacs does with its various programming modes and font-lock mode for syntax coloring. In short, it makes syntax analysis fully aware of some of the underlying program semantics. Thus, as new macros are defined, for instance, they can be colored in a certain way. Brian specifically demonstrated a Prolog example where a new operator was defined and how Climacs would correctly highlight the program text dynamically as the operator was defined or commented out. Brian also demonstrated some of the limitations of GNU Emacs Lisp-mode syntax highlighting and how it sometimes gets confused by more advanced Common Lisp constructs like block comments or complex symbol names with escaped characters. I was impressed both with the capabilities of Climacs and with McCLIM generally. Brian ran the whole presentation on a Mac laptop and things were snappy the whole time.
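
Constructs like these (my examples, not Brian's) are exactly the kind of thing that trips up regex-driven highlighting but falls out naturally from a real incremental parser:

    #| Block comments in Common Lisp nest,
       #| which regex-based highlighters rarely track, |#
       so this line is still inside the comment. |#
    (defun |strange name with spaces| ()
      "A symbol whose name contains spaces survives a real reader."
      '|another)weird(symbol|)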

As with Tuesday, the last three sessions of the day were plenary sessions with the whole group of attendees present.

The first plenary was Patrick Dussud's Re-inventing Lisp for Ubiquity. Patrick was one of the key guys at TI on the Explorer project and now works for Microsoft as the lead architect for the CLR virtual machine technology for .Net. The main premise of Patrick's talk was that Lisp is a great language but that today it exists in its own little self-contained world. Basically, Lisp doesn't play very well with others. And no, in Patrick's mind, FFIs are not the same thing as playing nicely with others; there is still the issue of the differences in standard data types and dynamic typing. Patrick described his dream for a new version of Lisp that would be statically typed and use more "standard" type representations. Thus, no tagging. This Lisp would be a "peer" in standard systems: you could develop modules in this new Lisp and link them, dynamically or otherwise, with modules written in any other language. If you're into Microsoft's .Net, that would mean a new Lisp that would interact properly with other software components and development tools such as debuggers and object browsers for the CLR.

If you're saying to yourself, "But that's not Common Lisp," then you're right. It's not. Patrick's suggestion is to try to keep some of the things which Lisp does better than any other language, things such as code-is-data and closures, but eliminate those things that keep Lisp from integrating well with other languages.

Patrick generally got a negative reaction from the room. Most people were heard muttering things like, "But I like dynamic typing..." I may have even heard something like, "He's a witch! Burn him!"

My own feeling is that there's nothing wrong with Patrick's suggestion. Yes, it isn't Common Lisp. But it's still a Lisp and there is nothing wrong with developing a new version of Lisp to suit some different requirements. Others may not like the compromises involved, but my own feeling was that such a Lisp would be interesting and would certainly have its uses. Right now, Lisp generally doesn't play well with others, just like Patrick said. Whether that's a problem or not for you depends on your point of view.

In the end, Patrick's talk generated a lot of comments and questions from the audience. There were times when it seemed he might not make it out of the room alive, and other times when things were a lot less heated.

Following this, Henry Baker, of garbage collection fame, delivered The Legacy of Lisp -- "Observations/Rants". I'll say right up front that I really enjoyed Henry's talk; it was one of the best I saw at the conference. The talk was a pretty thorough critique of where Lisp is versus where it could be. You don't get a rose-colored-glasses Lisp lover in Baker.

While Baker clearly loves Lisp, he was very forthright in criticizing the language and the community on a number of points. Baker started off asking some very difficult questions: "If Lisp is so great, how come its development environments haven't advanced in decades?" (my paraphrase). Baker said that his own development style proceeds from a rough prototype through to a finished product. One of the nice things about Lisp is that it supports this style all the way through; there is no need for a separate prototyping language followed by a "real language" of some sort. However, Lisp editors still leave a lot to be desired. Baker argued for a return to structured editors similar to those that existed with Interlisp at PARC, editors capable of massive restructuring of programs in a coherent fashion. In particular, it's important to be able to rename things on a massive scale. This matters when you're moving from a prototype to a final program and start correcting the semantic errors in the initial names you gave to various objects. It also matters when you're working on some piece of code you didn't write and want to start annotating things and morphing them to fit your view of the problem, rather than the way the original author named them. Baker claimed that no language to date had delivered what he was asking for. I actually had to differ with Baker on that point. It sounded like what he wanted is what the current software community calls refactoring, and some of this does exist in the Java world in the excellent IDEs Eclipse and IDEA. In fact, some of the alpha-renaming and beta-reduction transformations from the lambda calculus, which Baker was asking to have embedded in editors, are present in these IDEs under different names.

Baker argued that efficiency has always been a Lisp problem and that the general Lisp community has never been interested in efficiency, and in some cases is almost hostile to any suggestion that things be made more efficient. He said that it's simply too easy to write slow programs in Lisp and that most of the programming books that teach Lisp do so in a way that naturally leads new Lisp programmers to write slow Lisp code. Further, there are too many bad algorithms still embedded in various Lisp library functions that need to be cleaned out and made efficient. Finally, a big problem with Lisp is that there are very few Lisp profiling tools. When you actually find that your Lisp program is slow, it's very difficult to determine why without a lot of trial and error.
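
He has a point: portably, about all you get is the standard TIME macro; anything finer-grained (statistical profilers, call counting) is implementation-specific. A trivial example of the portable state of the art:

    (defun slow-hypot (n)
      "Deliberately boxy floating-point arithmetic for TIME to report on."
      (loop for i from 1 to n
            sum (sqrt (+ (* i i) 1.0d0))))

    (time (slow-hypot 1000000))
    ;; Prints elapsed/run time and, in most implementations, the bytes
    ;; consed, which is often the first clue when hunting a slow spot.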

Interestingly, Baker argued for type checking to be introduced into Lisp. Baker said that many Lisp people have a natural aversion to type checking, but that it's obvious some amount of static type checking would be good; while it would not catch every error, it would catch some, and that's a good thing. It was interesting to compare this position with Patrick Dussud's virtually identical position earlier. Hmmm....
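
It's worth noting that Common Lisp already has optional type declarations, and compilers like CMUCL will check and exploit them at compile time; what Baker and Dussud seem to want is for this to be systematic rather than opt-in. A small sketch of what's possible today:

    (declaim (ftype (function (double-float double-float) double-float)
                    weighted-sum))

    (defun weighted-sum (x y)
      (declare (type double-float x y))
      (+ (* 0.3d0 x) (* 0.7d0 y)))

    ;; (weighted-sum 1.0d0 "oops") draws a compile-time warning from a
    ;; type-checking compiler, rather than failing at runtime.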

Finally, Baker suggested that Lisp has turned into a static language, and that static languages are doomed to die (like Latin, as he put it). He said that Common Lisp has actually been a stumbling block for further Lisp evolution, halting almost all innovation within the Lisp community. That felt like a breath of fresh air to me; I have been thinking the same thing for a while. Common Lisp needs an update. Vast portions of standard libraries that the Common Lisp community needs have yet to be designed and delivered. And so many programmers choose Python or Ruby instead, because good libraries exist there for solving modern problems.

Baker then argued that Lisp needs better tools for bit-hacking. He said that bit-hacking is not just a low-level, operating system issue. It's something that you encounter all the time with various application protocols. Take compression, for example, in the form of Gzip, MPEG, or JPEG. These are ubiquitous protocols on which Lisp has no particular advantage, and probably has a big disadvantage. As somebody who deals with networking algorithms and protocols all the time, I had to agree with him here. Lisp does need better standard bit-hacking tools. This could be done in a standard library with special compiler support to open-code various bit-bashing primitives into compiler output.
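
To be fair, the language does ship with LDB, DPB, and friends; the complaint is about speed and the lack of higher-level tools, not raw capability. Here's a quick illustration (the header layout is hypothetical, my own example) of picking apart a 16-bit protocol word:

    ;; Hypothetical layout: 4 bits version, 4 bits flags, 8 bits length.
    (defun parse-header (word)
      (declare (type (unsigned-byte 16) word))
      (values (ldb (byte 4 12) word)   ; version: bits 12-15
              (ldb (byte 4  8) word)   ; flags:   bits 8-11
              (ldb (byte 8  0) word))) ; length:  bits 0-7

    ;; (parse-header #xA17F) => 10, 1, 127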

Baker argued for real-time extensions to Lisp that would allow fine-grained control of scheduling, and for a good, integrated, persistent database (he even referenced AllegroCache as perhaps providing the latter).

About this time, Baker was roughly 40 slides into a 56-slide presentation that was supposed to fit in a ~40-minute timeslot. He had already gone over his time budget. He also had John McCarthy sitting in the front row, waiting to follow him. John and Henry were exchanging comments on various aspects of Lisp history (Henry telling John at one point that dynamic variables were really broken). In short, there was no way this presentation was going to finish the way Henry had thought it would. While there was simply too much information for the timeslot, Henry did a great job and pulled no punches when it came to being critical of Lisp's failures. Henry obviously loves Lisp and has great respect for its accomplishments, but he's also very realistic. I found this particularly refreshing. Sometimes Lispers wax so poetic about the language that they can't see its flaws. Henry did not fit that mold.

Whew! Still with me? One more session to go...

Last up was John McCarthy. Well, how could I miss this? Markus was sitting right in front of John with his John McCarthy t-shirt on. At this point in the day, I was pretty exhausted. Being sick and not having gotten much sleep the night before, I found myself actually dozing a bit during John's talk.

John discussed some pretty abstract ideas about the syntax of computer languages, saying that instead of the single syntax we usually think about, a language actually has more than one. He described an input syntax, an output syntax, and a computational syntax.

John described his new language, Elephant, which he's been working on for a while. Elephant was designed to help John in some of his AI research and has yet to be implemented.

One of the most interesting things John said was that he has been out of the Lisp community for quite some time, instead devoting his time to the artificial intelligence research that first caused him to invent Lisp.

Interestingly, John said that he, like Baker, thinks that Common Lisp has ended up holding Lisp back: by defining an unchangeable standard for Lisp, well, Lisp stopped changing. And John definitely saw that as a bad thing. Several members of the audience argued with him a bit on that point, but he held his ground. At one point, he said the quote paraphrased in Markus's blog:

If someone was to drop a bomb on this building, it would wipe out 50 percent of the Lisp community. That would probably be a good thing. It would allow Lisp to start over.

That wasn't exactly what John said, but it was pretty close. Again, the main thing here was somebody with impeccable Lisp credentials saying that Lisp was far from perfect and in fact needed to do better than Common Lisp. The fact that Patrick, Henry, and John all argued essentially the same thing was a wake-up call, I think.

With that, the session ended and I headed home. There was an ALU meeting that was supposed to take place afterwards, but I was so tired that I didn't end up staying.

Hopefully, I got most of that reported correctly. Again, I apologize for the delay in getting this written.


Friday, June 24, 2005

Upgrades resolved 

I finally got my Slime/CLISP thing worked out. For whatever reason, it turns out that Slime 1.0 and 1.2.1 are not compatible with CLISP 2.33.2. When I did my upgrade, I took the opportunity to move from Slime 1.0 to Slime 1.2.1 and from CLISP 2.33.1 to CLISP 2.33.2. Previously, Slime 1.0 and CLISP 2.33.1 had been playing nicely. Last night, I tried reverting to Slime 1.0, thinking this was basically a Slime issue; I figured that CLISP hadn't changed that much between 2.33.1 and 2.33.2, so that couldn't have been the problem. But neither Slime 1.0 nor 1.2.1 worked with CLISP 2.33.2. Of course, the problem is always the last thing you check. Tonight, I tried it with the older CLISP and things fired right up.

I haven't fully tracked down the problem, but it's somewhere in the Swank bootstrapping, probably in the CLISP backend. Basically, Swank gets loaded but never creates the server socket. Slime is left polling for the temp file in which Swank should write the socket number so that Slime can connect. Only that file is never written, and Slime is left jilted at the altar. I have no idea whether this is XP-specific or simply CLISP 2.33.2-specific.
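
One workaround I may try (a sketch of the standard approach, not something I've verified against CLISP 2.33.2) is to sidestep the temp-file handshake entirely: load Swank by hand, start the server on a known port, and connect with slime-connect instead of M-x slime.

    ;; In the CLISP REPL, after loading Swank manually:
    (swank:create-server :port 4005 :dont-close t)

    ;; Then, on the Emacs side:
    ;;   M-x slime-connect RET 127.0.0.1 RET 4005 RET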

Further debugging will have to wait. After a couple of days of ILC, I want to hack some Lisp!


Thursday, June 23, 2005

Lisp Bloggers Unite! 

The other day, Brian Mastenbrook was insightful enough to say, "Hey, everybody who writes a Lisp-related blog, let's get a picture together before this ILC 2005-thing is over with." Here is the result. The pudgy guy in the back with the yellow shirt, that's me. Of course, the camera added another 10 pounds to the 20 I already need to lose. Man, I'm glad Bryan O'Connor was standing a bit in front of me.

Markus was, of course, wearing one of his McCarthy shirts. He asked John for permission to keep producing them and sat in the front row of the lecture hall.

At one point, I did, very discreetly mind you, give John Wiseman a sniff. No detectable lemon smell from where I was standing. That's one theory disproven.

In all seriousness, I had a great time at ILC. Everybody was fun and we all share an interest in a fantastic programming language.

I was still not feeling too well the second day, but I managed to get through it. I'll do a full write-up of Wednesday's sessions tomorrow.


Upgrades... grrrrr... 

On Monday of this week, I finally got around to installing a larger hard drive into my 5-year-old laptop. The puny 10 GB, 4200 RPM drive just wasn't cutting it. I dropped in a 60 GB, 5400 RPM thingamajig. As a part of the upgrade, I figured I'd move to Windows XP rather than Windows 2000. The old installation had been there for five years and was starting to get rather unstable. Either way it was going to be a headache, so if I had to take the hit, I figured I'd go for Win XP.

Generally, I'll say that I like XP better than 2K on my laptop. It definitely has better logic for powersave and suspend/restore than 2K did, and it returns from a suspended state much faster. Further, the 802.11 wireless subsystem seems far better as well.

BUT... (and there's always a "but" in one of these stories, right?), I had to reinstall everything. I just got emacs fired up. I installed CLISP. Now I'm fighting with Slime to get it talking to CLISP correctly. Something is messing up when emacs is supposed to be making contact with Swank running in the Lisp process. I can't quite figure it out. The Lisp process buffer shows it starting up and loading/compiling all the various Swank-related files, but emacs stalls with a 'Polling "c:/DOCUME~1/Dave/LOCALS~1/Temp/slime.1284"' message.

Grumble...


Wednesday, June 22, 2005

ILC 2005, Tuesday evening 

Sorry for the delay on this report. My body shut down last night and I just couldn't get to it.

Okay, following Paul Dietz's talk about the Common Lisp test suite, I attended Arthur Nunes-Harwitt's talk titled Advice about Debugger Construction. This was an interesting talk about how Lisp debuggers should be implemented, following a basic principle of least astonishment for a user.

The next three sessions were plenary sessions. The first was Bert Halstead's Curl: A Content Language for the Web. In summary, Curl is a language that lets you easily implement some of the applications that would otherwise use Java or AJAX-like techniques. The key idea is making web applications more responsive by doing some level of processing on the client rather than assuming the client is "dumb" and doing all processing on the server. In my view (not necessarily what Bert said), Curl is sort of a fusion of HTML, Javascript, and active/Java server pages, but executed entirely on the client. I didn't see anything in Curl that couldn't be implemented another way, but Curl surely makes things far easier than the equivalent using "standard" web technologies. In fact, Bert demonstrated an application that looked remarkably like Google Maps, pulling down satellite data from a Microsoft web site.

The next talk was James McDonald's Correctness-by-Construction is in your future. James's basic thesis is that the size of software systems is rapidly increasing, past the point where we can actually verify their correctness using standard testing techniques. Rather than constructing software by writing standard algorithms, James argues for a future in which specifications are translated into a formal language, from which "real" code is synthesized. James cited a bake-off done by the NSA in which this method was substantially more effective at reducing defects than a more traditional UML-based design methodology.

The highlight of the day was J. Strother Moore's A Mechanized Program Verifier. Moore is the Moore in Boyer-Moore string matching and various other automated proving algorithms; that alone was worth the price of admission. Moore is, frankly, just a good speaker, and his talk was very well put together. McDonald's talk was a great setup for Moore's, and they both echoed similar themes. Moore, along with Boyer previously, has been working on automated theorem proving for quite some time. Using a subset of Common Lisp called Applicative Common Lisp, Moore and his team have developed a system that can prove theorems when provided with a series of lemmas. Moore quickly made the case that this sort of mechanized proving has great commercial utility. Moore described Intel's debacle with the Pentium FDIV bug, a mistake that caused Intel to take a ~$470M charge on their financials. AMD was about to introduce the K5 just after that and hired Moore's team to verify the floating point unit. This technique has been used repeatedly at AMD since then to verify the floating point portions of their chips.

Moore also described work that has gone on with Sun to verify the behavior of the Java virtual machine. Moore's team has constructed a 700-page description of the JVM and has used that to verify various properties of the Java byte-code verifier. Additional applications included verification of various floating point processors (Power 4, K5, Athlon, and Opteron), verification of DSP microcode (Motorola), and verification of the security policy of Rockwell Collins's AAMP 7 microprocessor.

What made Moore's talk so fun was that he showed some simple live demos of the system. It was very interesting to see things run and theorems being proved. When Moore ended his talk, he got the closest thing to a standing ovation that you could receive.
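
If you've never seen this kind of system in action, the canonical first taste is the textbook append-associativity theorem (a standard ACL2 example, not one of Moore's demos from the talk): you submit the defthm and the prover finds the induction on its own, with no user-supplied lemmas.

    ; Submitted at the ACL2 prompt; the prover picks an induction on X
    ; and proves the theorem automatically.
    (defthm associativity-of-append
      (equal (append (append x y) z)
             (append x (append y z))))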


Hey, I was feverish 

Okay, the one drawback to blogging is that it's pretty easy to make an ass out of yourself very quickly. Yes, of course, I mean Richard Gabriel, not Peter. Doh! Thanks, John.

If I have any excuse to save my honor, it's that I came down with some flu thing yesterday evening and it was all I could do to avoid passing out in the last couple of sessions. Hence the lack of an evening report (which I'll get to in a few minutes). I'm feeling a bit better today, but still out of sorts. Sleep is a wonderful thing.


Tuesday, June 21, 2005

ILC 2005, Tuesday afternoon 

Well, I'm sitting here in an afternoon session. So far, this whole experience has been quite fun. After getting my Stanford parking permit situation straightened out this morning, I was able to catch the end of Jeff Shrager's How Lisp will Save the World session. Jeff did a great job of discussing Lisp in a biology context. I'm not a biologist and I still understood the overall direction of Jeff's talk, a great testament to him as a speaker. Additionally, he had a lively presentation style that kept me quite awake, even for an 8:00 AM session.

I attended three breakout sessions following Jeff's great session, and I wished I had gone to the other breakouts instead. I won't mention names, to protect the guilty. The speakers generally spent a lot of time setting up the problem without really describing the guts of the talk; then everything was compressed into the last 10 minutes, with lots of slides being skipped. One tip for presenters at any conference: please make sure you provide good, concrete examples of what you are talking about early in your presentation (the second slide after the title slide would be good). If I'm not familiar with your problem domain, it's very hard to grok a lot of theoretical information in a 40-minute time slot; with a simple example in hand, I can easily make the connection. A second tip is to quickly answer the "What's in it for me?" question that's going through everybody's head. ILC has a slightly academic feel to it, and perhaps this is the normal style for such a conference, but concrete, well-motivated talks would let a conference newbie like me enjoy even the ones where I don't have extensive background.

I spent lunch with Peter Seibel, William Bland, and Ben whose-last-name-I-forget (sorry, Ben, I'm horrible with names). The lunch discussion was great. Nice discussion about Peter Gabriel's talk from the previous day, which I had missed. (Update: Okay, okay, Richard Gabriel. Sheesh.)

Following lunch, I attended Matthias Hölzl's talk, A Framework for Dynamic Service Oriented Architectures. In contrast to some of the weaker morning sessions, Matthias did a great job of getting through the material and communicating the main points. This was a challenge because his project actually uses Dylan rather than Lisp, so he included a short Dylan tutorial at the beginning so that people could follow the examples.

Finally, I just attended Paul Dietz's The GNU ANSI Common Lisp Test Suite. Paul is also a great presenter and did a great job of explaining the test suite and the rationale behind it. Personally, I think Paul's test suite is the greatest gift to the Common Lisp community in decades. The test suite is finding lots of bugs in all the various Common Lisp implementations, commercial as well as open source. Assuming those bugs get some attention from implementation developers, that's a great thing for application writers who are looking for consistent behavior. Unfortunately, to write a sizeable program in Common Lisp, you're still going to have to stray from the safety of the strict ANSI spec into implementation-specific libraries. In spite of this, the quality of CL implementations is improving. Well done, Paul!


ILC 2005, Tuesday morning 

So, here I am at ILC 2005 at Stanford. It turns out the wireless connectivity is excellent, so I might be able to blog a bit during the conference.

This morning was a bit of a mess. I rushed out of the house first thing this morning and drove over from the east bay. I thought I'd be late, but it turns out I was a little early. I had some trouble getting ahold of a parking pass and getting my badge. The registration desk was a bit late getting started. Oh, well, that's what happens when you arrive on the third day as opposed to right up front. I managed to avoid getting a parking ticket and got situated.

It's a bit of a who's who here. Frankly, it's like seeing every Lisp-related person I have interacted with on the net, live, in one place. I saw Brian Mastenbrook earlier this morning. I saw John Wiseman a few minutes ago. I plopped down in a seat next to Pascal Costanza. Kenny Tilton just walked by. What fun. Sort of the Lisp equivalent of standing along the red carpet at the Oscars. Remind me to introduce myself...


Sunday, June 19, 2005

Back to it... 

Well, it has been about forever since I last penned anything on this page. My life has been a bit upside down the last few months. The company I founded almost five years ago ended up shutting its doors in April/May. I had originally left back in December, 2004. Sad. The reasons were many. Management mistakes. Investor issues. You name it, there was some of it at fault.

So, for the past few months I have been throwing myself into figuring out "what's next?" I'm currently doing some product strategy consulting for the high-tech industry here in the Bay Area. The good news is that I seem to be in demand, with multiple companies trying to recruit me into various positions. That's a nice feeling.

The downside is that I haven't done much with Lisp. It's all I can do to rebuild the SBCL RPM for Sourceforge every month.

Hopefully, that will change here over the next week. I bought myself a couple of days' worth of attendance at ILC 2005. I'm looking forward to the various sessions. If anybody is attending and wants to meet up, I'll be there on Tuesday and Wednesday. Unfortunately, I had other commitments for today and tomorrow.

I don't know what Stanford's wireless capabilities are, but perhaps I can blog a bit from the conference.

