Saturday, November 27, 2004
I just tweaked the license to conform with the LLGPL rather than LGPL. See the COPYING file in the release package.
Wednesday, November 24, 2004
As a follow-up to my previous posting about REST and continuations, I found this today on Lambda the Ultimate. Anton van Straaten will be presenting at LL4 on the subject of REST and continuations; the presentation is titled Continuations continued: The REST of the computation. I wish I could attend. Perhaps they'll post the proceedings following the conference.
Update: The slides and video are posted here.
Monday, November 22, 2004
I just released a 0.6 version of my Resolver library. This release fixes one bug related to UFFI usage. For some reason the bug did not show itself on SBCL or CMUCL, but somebody found it while looking to port the code to another CL implementation. The biggest user-visible change is that the library now uses keyword symbols for all resource record types and other constants in the interface. So, instead of
(lookup "findinglisp.com" 'mx)
you now say
(lookup "findinglisp.com" :mx)
and the answer comes back as
((53369 T :QUERY NIL NIL T T :NO-ERROR 1 2 2 6) (("findinglisp.com." :MX :IN)) (("findinglisp.com." :MX :IN 86178 10 "mx01.1and1.com.") ("findinglisp.com." :MX :IN 86178 10 "mx00.1and1.com.")) (("findinglisp.com." :NS :IN 172578 "ns27.1and1.com.") ("findinglisp.com." :NS :IN 172578 "ns28.1and1.com.")) (("mx01.1and1.com." :A :IN 83865 #(217 160 230 13)) ("mx01.1and1.com." :A :IN 83865 #(217 160 230 11)) ("mx00.1and1.com." :A :IN 1237 #(217 160 230 10)) ("mx00.1and1.com." :A :IN 1237 #(217 160 230 12)) ("ns27.1and1.com." :A :IN 77450 #(217 160 224 3)) ("ns28.1and1.com." :A :IN 32577 #(217 160 228 3))))
Peter Coffee wrote an interesting column for eWeek today. It describes a completely bogus patent application submitted by the Visual Basic team at Microsoft. I completely agree with Coffee's assessment of the situation. This is absurd (and I hold five patents myself).
Interestingly, Coffee mentions Lisp, as he has a couple of times before in other columns. I wrote to him and asked whether he was a closet Lispnik and he replied,
Not "closet," I assure you. My proclivities for Lisp, Ada, and Java as languages in which I can actually think about the problem -- rather than the language -- are not meant to be secret in any way.
I'm a recovering Java programmer, myself, and have never touched Ada, but it's nice to know that people wielding the power of the pen have a soft spot for Lisp.
I also firmly believe in Coffee's last point, that service businesses must automate in order to go through any business transformation. My current company, Inkra Networks, provides a virtualized network infrastructure solution that allows service automation on top of it.
I found this on OSNews today: an interview with Mike Deliman, a Wind River engineer who has worked on some of the NASA/JPL space probes. It's interesting to hear this side of things, given Ron Garret's (aka Erann Gat) view of Lisp at JPL.
Saturday, November 20, 2004
I was just perusing my web logs, as I do from time to time. It looks like a lot of you out there took my posting about the browser balance to heart. During the week of November 1, only 50% of Finding Lisp visitors were using Mozilla-based browsers (including Firefox) and 18% were using some version of MSIE. The week of November 15, Mozilla usage jumped to 66%, with MSIE dropping to 14%. The correlation isn't perfectly direct: usage of a bunch of other browsers fell, too, and some of the percentages are affected by things like how many times the Google robot reindexes the site. On Thursday and Friday, Mozilla usage was over 70%, which is clearly unusual.
Anyway, the general trend is clear: away from MSIE, and towards something else (almost anything else). That's generally good.
The scary thing is that some people are still using MSIE 5.5 and 5.0 instead of 6.0, as if 6.0 didn't have enough bugs and security holes of its own. At least a couple of others are using old versions of Firefox from back when it was named "Firebird." Old versions of Firefox are known to have some security holes, some of them large. Now that Firefox 1.0 is out, I would upgrade to that.
Update: Here's another reference to the shift taking place. We aren't yet at a tipping point, but there's a bandwagon forming.
Friday, November 19, 2004
A humorous look at the shifting sands of the browser balance, courtesy of C|Net. Again, personally, I made the decision on every computer I use to go with Firefox or Mozilla. Whereas two years ago I would have been worried that sites wouldn't work with Mozilla, that just isn't true these days. Given the better standards support in Firefox, I would actually expect that to shift the other way as Firefox/Mozilla market share grows. The artistic crowd will keep advancing and soon you'll start to see those "Best when viewed in Firefox" sort of buttons again.
Thursday, November 18, 2004
I just saw a reference to Skribe on Lambda the Ultimate. Skribe is a text markup language, similar to what you would get with TeX or HTML, but done as a Scheme program. Unlike TeX or HTML, you thus have the whole power of Scheme available to you during the document generation process. Check out the Skribe link above for an example of Skribe markup. Generally, it's pretty clean.
This really struck home today because I have been looking at doing a bunch of writing on my Linux system and am trying to figure out what tools are available. I used to be a very heavy LaTeX user about 15 years ago, when I was working for HP and using HP-UX daily. The problem is, I really think the WYSIWYG paradigm is the Right Way(TM) to do things, long term. The "edit, compile, debug" cycle that TeX imposes is painful in the same way that it is when programming in C. Essentially, WYSIWYG is a sort of REPL for application programs, providing the user with instantaneous feedback as they manipulate the system state visually. Back in 1989, you didn't have a lot of options on a Unix-based system unless you wanted to spend thousands of dollars on Interleaf. So, everybody used TeX.
Of course, in the last 15 years, things have changed substantially. There are good options like OpenOffice.org Writer and Abiword that provide good quality output in a WYSIWYG editing environment. In spite of that, TeX still shines in terms of its high-quality output.
Myself, I started writing a short document the other day in LaTeX. I did fine for the first couple of pages that were mostly just prose and lists of various sorts. LaTeX is obviously very HTML-like in concept and it's easy to go from one to the other. Then I hit a table that I had to generate. In short, LaTeX's table macros just suck. Yes, the output is nice once you figure out how to get what you want, but getting things like line breaking, column widths, shading, etc., to happen is just difficult. TeX kept complaining about overfull boxes, blah, blah. So, in the end, I cut and pasted all my text out of Emacs and into OOo Writer. In a few minutes, I had the tables the way I wanted them and went on.
I can see doing a long book or something in TeX. It's fairly easy for long stretches of prose and generates good output. For other stuff, however, it's pretty painful. Yes, I know that if I look on CTAN I can probably find 10 better table macro extensions that would have made my life easier, but who has the time?
Tuesday, November 16, 2004
It has been interesting to watch the rise of a new generation of browsers lately. While Microsoft sits on its hands with a security-bug-ridden IE 6, Firefox seems to have done reasonably well with its recent launch. Even Opera seems to be getting more press mentions, and I have friends who use it daily. Myself, I switched from IE 6 to Mozilla 1.x about 18 months ago and then recently to Firefox.
One interesting part of the switch to better browsers is the improved support for CSS. When you really understand CSS, it's staggeringly powerful. I had seen some simple demos of it previously and they were just what you would expect--simple. Obviously, CSS can be used to change fonts, colors, etc. But recently, I found CSS Zen Garden and that really drove home how awesomely powerful CSS is. You'll probably want to view this site with a good modern browser (I suggest Firefox). Check out all the various themes that were created for the same content. Amazing.
Interestingly, more than half of Finding Lisp's readers use a Mozilla-based browser of some sort, and that figure is rising.
Monday, November 15, 2004
Whew! I managed to kick over another beehive the other day. My blog entry on Lisp RAD generated a slew of comments, some public and some private, as well as other blog entries with various other opinions.
After reading Nikodemus Siivola's blog entry, I have to agree with him. I overstated the point in my original posting when I said that Lisp was in worse shape than Java with respect to deployment. Nikodemus is quite correct that in some ways Lisp is in much better shape because public domain or liberal license (e.g., BSD, GPL) runtimes exist and can be packaged with an application to make a single installation package. If this is done right, the user never needs to be aware that the application is using Lisp. The main difference today is that there exists a standard installation package for Java (Sun's package, for instance). For Lisp, each developer distributing an application would have to do the packaging individually. Depending on how this is done, you might end up with conflicts (different applications dumping different versions of the runtime in /usr/bin/ or /usr/local/bin/, for instance). But, that's solvable.
In the comments section of the original posting, Chris Dean commented that Lispworks may in fact deliver some of this. Originally, I had thought that the Lispworks license was more restrictive, but that isn't the case. Purchase of the Professional or Enterprise editions allows you to redistribute programs with no restrictions. See the Lispworks web site for more information. The $999 Professional license is pretty reasonable.
In the interest of completeness, I checked on Allegro's license terms, which are better than I had originally thought they were, but different from Lispworks'. It looks like Franz does allow redistribution of programs developed with Allegro and that Allegro has all the machinery in the IDE to make this easy. The Franz license states:
Franz Inc. permits you to distribute or deploy your application packaged as a standard runtime image using Generate Application (subject to the restrictions below in this section) for 12 months after you purchase the license for the Enterprise or Enterprise Platinum Edition of Allegro CL, and thereafter you may continue to distribute or deploy if you renew your Enterprise or Enterprise Platinum Edition of Allegro CL annually (please refer to section 9(d) of the Source Code and Support Addendum). If you do not renew, then you no longer have this right to distribute or deploy. [Emphasis mine]
So, with Lispworks, you buy the development system and you're done. With Allegro, you must keep renewing your Allegro license for as long as you continue to distribute or deploy the application you created. This may or may not be problematic. Personally, I don't like having my program held hostage forever and would favor a Lispworks-style license over an Allegro-style one. But that's me.
Overall, this situation is much better than I had previously thought: the Lispworks and Allegro IDEs have some tools to automatically package developed applications easily (or so they say; I have not used either tool and don't have first-hand experience) and both licenses allow redistribution under varying terms.
Update: Okay, I really botched that. As some commenters have pointed out, I misread the Franz Allegro license. The above terms only apply for non-commercial uses of Allegro. If you want to make money using something written in Allegro, you need to contact Franz and negotiate unspecified license terms. I was surprised when I read the language above because I thought I had remembered that Allegro's license was fairly difficult. Now I know why and that I wasn't dreaming. So, to summarize, Franz wants you to keep paying them a yearly license fee for the duration of time that you distribute a free program and wants you to enter into negotiations to distribute a commercial program. Either Allegro is just the absolute cat's meow, or the guys at Lispworks win this hands down. Note that I don't begrudge Franz for charging what the market will bear, but I personally wouldn't do business under those terms.
Finally, there was a great comment by Mikel Evins discussing SK8, a Lisp-based RAD environment created a number of years ago at Apple, on which Mikel worked. Mikel has posted on his blog that he wants to create an open source version of SK8, called Skate. If you are interested in helping with Skate, contact Mikel.
Thursday, November 11, 2004
I was just reading about Gambas on OSNews today. It's basically a Visual Basic-like RAD development environment for Linux. It has a graphical designer and allows you to hook easily to databases. Finally, when your application is complete, the IDE creates an installation package for Mandrake, Debian, Redhat, or Suse. That rocks!
The obvious question is why doesn't something like this exist for Lisp? Lisp, it seems to me, is the ultimate RAD language. It would be really nice to be able to create a graphical program in Lisp and then have it all packaged up for easy installation on a non-Lisp-user desktop.
Today, Lisp seems to suffer from some of the same issues that Java does on the desktop, only worse. In the Lisp/Java world, deploying an application always starts off with downloading and deploying the underlying language runtime, before the application itself is deployed. The fact is, users don't want to know that a certain application is even developed in Lisp/Java/whatever; they just want it to run well. Heck, even experienced Lisp hackers are scared of a CMUCL build. As a result of this problem, Java has largely retreated back to the server, where it dishes out web pages, etc. Lisp has the same problems, but Lisp runtimes are not as easy to deploy, are not very standardized in their interfaces to the operating system, etc. At least Java has freely available desktop runtimes that work well across platforms. With the exception of perhaps CLISP, Lisp doesn't. Even the commercial Lisp implementations (which have licensing terms that all but prevent mass desktop deployment), while they have an Emacs-like IDE, don't seem to have what I would call a RAD environment (the difference is subtle--IDE to me just means integration of the editor, debugger, etc., whereas RAD implies a GUI design environment, deployment wizards, etc.).
As with any musing like this that gets overheard in the open source world, this posting will probably draw a lot of "okay, so quit griping and get coding..." replies. I'm hopeful that maybe somebody can point me at a Lisp RAD environment that I may have overlooked. Note that I'm not just looking for an IDE that integrates an editor and a REPL. Emacs/SLIME/SBCL do that perfectly for me right now. The perfect environment would be something like an SBCL base, hooked to a GTK+ API and a RAD screen development application, with a push-button packager that rolls an application up into a single OS installation program, at least for Linux. In other words, imagine VB or Delphi using Lisp instead of BASIC or Pascal.
Wednesday, November 10, 2004
Recently, I had lunch with an ex-coworker who made the jump out of high-tech and into financial services as an investment manager. During the course of conversation, he showed me the trading analysis system that he had written in Java to allow him to try various investment strategies and perform different sorts of technical analysis on stocks and indexes. I got interested in doing some analysis myself and this seemed like a fun project to try in Lisp and see how it worked.
When you look at various functions that are common in technical analysis, you'll find that they are very similar at the macro level, but differ slightly at a micro level. For instance, you often want to operate on a sequence of closing prices and compute a function, a moving average for example, over those prices. This is ready-made for MAPCAR:
CL-USER> (mapcar #'a-function *prices*)
Given that a function like a moving average is parameterized with the number of samples in the average, I wrote a function that returns a function that computes a moving average with a specific period. Thus, it's easy to say:
CL-USER> (mapcar (moving-average 3) '(1 2 3 4 5 6 7 8 9 10))
(NIL NIL 2 3 4 5 6 7 8 9)
The function returned by MOVING-AVERAGE returns NIL for those data points that occur before it has seen the number of data points required for the average. In this case, since we need 3 data points for the average, the first two results returned are NIL.
But it's even better than that. When you use a function like a moving average with MAPCAR, you have to save some of the old data values that MAPCAR passes to the function so that they can be used later in the calculation. I started to write MOVING-AVERAGE in the straightforward way you'd expect when I realized that I was churning out a bunch of code just to save the "historical prices." That code is common to any function, like a moving average, that needs to save some data points for later. So I factored it out into a separate function called MAKE-HISTORY-FUNCTION, which returns a function that saves some state and calls another function on that state each time it is updated. The code is probably easier to read than my explanation of it:
(defun make-history-function (periods evaluator)
  "Returns a function that takes a series of data points, one at a
time, and returns the results of calling the evaluator with a
sequence storing PERIODS data points. The function returns NIL before
PERIODS data points have been evaluated. If the data point is NIL,
the function returns NIL and does not store the data point in the
history."
  (let ((history '())
        (count 0))
    (lambda (element)
      (if element
          (progn
            (push element history)
            (incf count)
            (if (>= count periods)
                (progn
                  (setf history (subseq history 0 periods))
                  (funcall evaluator history))
                nil))
          nil))))
Notice how the closure that is returned keeps hold of the PERIODS and EVALUATOR variables. Closures make it easy to parameterize the returned function without having to worry about creating extra variables in which to explicitly store that parameterization information.
So, MOVING-AVERAGE then simply becomes:
(defun moving-average (periods &key (key #'identity))
  "Creates a function that computes a moving average over PERIODS
data points."
  (make-history-function periods
                         (lambda (history)
                           (average history :key key))))
where AVERAGE is a function that simply averages all the data values in a sequence:
(defun average (sequence &key (key #'identity))
  ;; LEN starts at 1 because REDUCE calls the combining function only
  ;; n-1 times for n elements, so LEN ends up equal to the length.
  (let ((len 1))
    (/ (reduce #'(lambda (x y)
                   (incf len)
                   (+ x y))
               sequence
               :key key)
       len)))
Now, the neat thing here is that it's easy to create other functions that operate similarly to MOVING-AVERAGE. For instance, we can compute Bollinger bands like so:
(defun bollinger-bands (periods &key (key #'identity))
  "Creates a function that computes Bollinger bands over PERIODS data
points. The function returns a list with the average, upper, and
lower Bollinger band points."
  (make-history-function periods
                         (lambda (history)
                           (let* ((avg (average history :key key))
                                  (std (std-dev history :key key))
                                  (2std (* 2 std))
                                  (upper (+ avg 2std))
                                  (lower (- avg 2std)))
                             (list avg upper lower)))))
In this case, STD-DEV computes the standard deviation of a sequence (I won't show the definition here). Notice that BOLLINGER-BANDS just concentrates on computing Bollinger bands. All the mechanics of iterating through a sequence and saving data points are wrapped up in MAPCAR and MAKE-HISTORY-FUNCTION.
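For completeness, here is one definition of STD-DEV that reproduces the Bollinger output below. This is my reconstruction, not the original code: I'm assuming a population standard deviation (dividing by N), which matches the numbers in the example.

```lisp
(defun std-dev (sequence &key (key #'identity))
  "Population standard deviation of SEQUENCE, with KEY applied to
each element. A reconstruction to match BOLLINGER-BANDS above, not
the original definition."
  (let* ((n (length sequence))
         (mean (/ (reduce #'+ sequence :key key) n)))
    (sqrt (/ (reduce #'+ sequence
                     :key (lambda (x)
                            (let ((d (- (funcall key x) mean)))
                              (* d d))))
             n))))
```

For the history (3 2 1), the mean is 2 and the deviations are 1, 0, 1, giving sqrt(2/3), or about 0.8165, so two standard deviations is about 1.633, consistent with the bands shown below.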
We can compute Bollinger bands for a really small data set as follows. Note that for real Bollinger bands you would typically use a 20-sample period.
CL-USER> (mapcar (bollinger-bands 3) '(1 2 3 4 5 6 7 8 9 10))
(NIL NIL (2 3.6329932 0.36700678) (3 4.632993 1.3670068) (4 5.632993 2.3670068) (5 6.632993 3.3670068) (6 7.632993 4.367007) (7 8.632994 5.367007) (8 9.632994 6.367007) (9 10.632994 7.367007))
We can now easily develop other functions like the stochastic indicator:
(defun stochastic (key-high key-low key-close
                   &key (%k-periods 5) (%d-periods 5))
  "Creates a function that computes the stochastic indicator over the
specified number of periods for %K and %D. This function requires
access to high, low, and closing prices for all data points, so
accessor functions must be passed in to retrieve the data from each
sample."
  (let ((%d-history (moving-average %d-periods)))
    (make-history-function %k-periods
                           (lambda (history)
                             (let* ((high (maximum history :key key-high))
                                    (low (minimum history :key key-low))
                                    (close (funcall key-close (elt history 0)))
                                    (%k (* 100 (/ (- close low) (- high low))))
                                    (%d (funcall %d-history %k)))
                               (list %k %d))))))
Note how the STOCHASTIC function uses a moving average under the hood to do some of its work.
The things I really noticed about doing this exercise in Lisp versus some other programming languages I have used in the past:
- Lisp makes it very easy to refactor and extract common functionality into separate functions. Because function composition is so simple in Lisp, this can be done at a much finer granularity than with other programming languages. In this case, the functionality of list traversal, storing historical data points, and actually computing the underlying technical functions are all in separate functions that come together to calculate what we want.
- Lexical scoping and closures simplify things greatly. Being able to just reference lexical free variables and have the right behavior occur under the hood is a huge time saver.
- Having the list data type built-in with a standard printed representation was perfect for interactive testing. Just hop to the REPL and input a test case and check the output. Very easy.
- This would have been a lot more code in Java. Part of that would be caused by the intrinsic verbosity of Java. The rest of it would be caused by the inevitable object decomposition that would have been done. Rather than just using lists of numbers to represent a stream of price quotes, I'm sure I would have created a Price class and others that would have ballooned the code substantially.
Next on tap is to take all this textual information and start drawing some graphs with it.
Tuesday, November 09, 2004
When I first got turned on to Lisp, I quickly discovered both Paul Graham's web site and Chris Double's blog. Since that time, I have been thinking about web design in Lisp. Both Paul and Chris have, at times, espoused the idea that it's pretty cool to use continuations (or at least closures, in Paul's case) to program web applications. You get a nice programming model, very much like a "normal" application, where you can effectively present a series of screens to the user and pick up input data while programming in a very linear way.
A few weeks ago, I found some articles on REST -- representational state transfer. REST is the name of an architectural style that was coined by Roy Fielding in his PhD dissertation to describe the way the web works. Essentially, Fielding argues, the web works by returning to a client a set of representations (HTML pages, typically, but not necessarily) that describe the current state of a web resource named by a URI. Each representation includes links to other interesting resources somehow related to the current resource and the client may access those resources using those links. If you look through either Fielding's dissertation or most of the other REST writings on the web, you'll find a whole bunch of buzzword-compliant, but ultimately vacuous, language about REST and how good it is. I finally boiled it down to the following key points, with the help of some of Paul Prescod's articles and website:
- HTTP is a very general, scalable protocol. While most people only think of HTTP as including the GET and POST methods used by typical interactive browsers, HTTP actually defines several other methods that can be used to manipulate resources in a properly designed application (PUT and DELETE, for instance). The HTTP methods provide the verbs in a web interaction.
- Servers are completely stateless. Everything necessary to service a request is included by the client in the request.
- All application resources are described by unique URIs. Performing a GET on a given URI returns a representation of that resource's state (typically an HTML page, but possibly something else like XML). The state of a resource is changed by performing a POST or PUT to the resource URI. Thus, URIs name the nouns in a web interaction.
The REST crowd says that these principles are what make the world-wide web the most scalable architecture ever built. Indeed, when you follow these principles, the overall web architecture and infrastructure is working with you, not against you. For instance, caching happens at many points in the web (client, intermediate nodes, and possibly in front of the server in the form of reverse proxy caches). If each resource is uniquely identified by a URI, you don't have problems with the browser back button and people can easily share URIs with others using cut-and-paste from the browser location bar. Because HTTP is a very loosely coupled, late-bound, general-purpose transfer protocol, clients and servers can evolve without the other end of the wire also having to change. Finally, intermediate nodes can interact with data traveling between client and server and participate in the protocol to optimize performance or other characteristics.
When you violate these principles, at least some of the web infrastructure shuts itself off or otherwise isn't working for you. REST specifically argues that the following architectural items are problematic in web application design:
- Applications that use server-side state don't scale as well as those that don't. The server-side state must be stored on servers and protected from loss in the event of a server failure if the application is going to be resilient. Further, the unique mapping between a URI and its representation may be modified by this state and thus fewer pages are cachable (since a cache doesn't know what state is on the server, the server must mark pages as non-cachable so other clients don't see the wrong information when accessing the same URI).
- A corollary to this is that server-side authentication state should be eliminated. REST advocates would argue that standard HTTP authentication should be used, since it is included with all HTTP requests for a given object and therefore allows the server to be stateless.
- Personalization is a problem. It relies on server-side state to create the personalized pages and they are not cachable since the URI is often the same across multiple clients.
- The typical interactive web application uses URIs as verbs (think Java Struts with its xxx.do URIs). Effectively, this moves the actions in a web app into the URI namespace and doesn't allow intermediate nodes to participate in the protocol. The intermediate nodes understand HTTP methods (GET, POST, PUT, DELETE) but don't understand xxx.do.
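To make the nouns-vs-verbs distinction concrete, here is a hypothetical sketch of REST-style dispatch in Lisp. Every name here (the *RESOURCES* table, DEFINE-RESOURCE, DISPATCH) is invented for illustration; this is not drawn from any real Lisp web library. The URI names a resource (the noun) and the HTTP method selects the operation (the verb):

```lisp
;; Hypothetical sketch: URI paths name resources; HTTP methods supply
;; the verbs. All names are invented for illustration.
(defparameter *resources* (make-hash-table :test #'equal))

(defun define-resource (path &key get put delete post)
  "Register handler functions for the HTTP methods on PATH."
  (setf (gethash path *resources*)
        (list :get get :put put :delete delete :post post)))

(defun dispatch (method path)
  "Find and run the handler for METHOD on the resource named by PATH."
  (let* ((handlers (gethash path *resources*))
         (handler (getf handlers method)))
    (cond (handler (funcall handler))
          (handlers '(405 "Method Not Allowed"))
          (t '(404 "Not Found")))))

;; REST style: /orders/42 is a noun; GET and DELETE are the verbs.
(define-resource "/orders/42"
  :get (lambda () '(200 "representation of order 42"))
  :delete (lambda () '(204 "")))
```

Contrast this with the verb-in-URI style like /deleteOrder.do?id=42: an intermediate cache or proxy looking at that URI cannot tell a safe, cachable GET from a destructive action, so it has to get out of the way.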
REST can be applied to both interactive (browser-based) applications, as well as web services. My take is that there are some drawbacks to applying a pure REST architecture to an interactive application. I think that you can apply much of REST and you'll end up with a great system if you do, but limiting yourself to HTTP authentication will make your application look like it's straight from 1995 (i.e. ugly as sin--you might as well give all your pages a gray background and times-roman fonts). That said, the principles that make up REST are sound and it is advantageous to follow them when you can.
For grins, I took a look at Amazon.com, and it's remarkably REST-like for most of the overall interface. Have you ever noticed how you can forward an Amazon link to anybody via email and it just works? There is a little bit of magic happening on the server, but most of it works because of REST principles. Now, Amazon obviously has a lot of customized pages. Rumor has it that those cost them dearly, too, in terms of scalability. They require a lot more server resources to serve up, and the content isn't as cachable as it could be. In their case, however, I'm sure they sell a lot more merchandise because they include that personalization.
From what I can tell, though, REST really shines when creating web services. Indeed, many of the REST resources on the web are devoted to describing why REST makes a far better web services infrastructure than something based on SOAP. After reading many of these, I think I have to agree. The combination of REST+XML is powerful for a general-purpose web services infrastructure. In fact, I think that REST could be applied to distributed Lisp applications by substituting sexprs for XML, and it would be far better than having to deal with SOAP and all its baggage. Another realization I came to is what a horrible, awful thing SOAP is. Simply put, it's out of control.
So what does this have to do with continuations and web programming? Simply that it seems like continuation-based programming might have some pretty big scalability problems if used too much on a high volume site. Does that mean continuation-based programming is bad? No, just that, like everything, you have to know when to apply it and when you're pushing it beyond its sweet spot. In particular, it seems suited for certain portions of an interactive web application, but probably would not be good to use in a web service design. Further, I'm pretty convinced that programmers should spend more time learning about state machines and how they work. Most of the interactive parts of a web application can be modeled as a state machine. With the syntax transformations afforded by Lisp macros, it should be possible to design event-driven web applications fairly easily and not require the saving of so much continuation state.
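As a toy sketch of that last idea, here is what a macro-defined state machine for a web flow might look like. The DEFSTATES macro and its clause syntax are entirely my invention for illustration; the point is that the only per-user state is the current state name, which could live in a cookie or hidden form field rather than in saved continuations:

```lisp
;; Toy sketch (invented syntax): DEFSTATES compiles clauses of the
;; form (state (event next-state) ...) into a function of
;; (state event) => next-state. Unknown events leave the machine in
;; its current state.
(defmacro defstates (name &body transitions)
  `(defun ,name (state event)
     (case state
       ,@(loop for (from . arcs) in transitions
               collect `(,from
                         (case event
                           ,@(loop for (event next) in arcs
                                   collect `(,event ,next))
                           (t state))))
       (t state))))

;; A checkout flow as a state machine.
(defstates checkout-next
  (:cart     (:checkout :shipping))
  (:shipping (:submit :payment) (:back :cart))
  (:payment  (:submit :confirm) (:back :shipping)))
```

With something like this, each request just carries its state name and an event; the server looks up the next state and renders the corresponding page, with no continuation state held in server memory.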
For some perspective, it's worth recalling how Viaweb itself actually worked, as Paul Graham has described it:
- Only the store editor was written in Lisp. The rest was basically C. This means that only direct Viaweb customers (merchants) actually used the Lisp portion of things.
- Once a merchant got the site design the way they wanted it, the system generated the HTML for what was basically a static web site with some CGI hooks. End customers interacted with this. All dynamism at the time of final presentation (the shopping cart) was done using old-school fork-and-exit CGI written in C.
- When merchants were editing a store, the system would create an entire Lisp process for each merchant. This process was started and stopped for each editing session.
- Closures were used to generate actions for various links in the editor. Each link was created dynamically using Lisp code with a unique (random) ID parameter. The IDs were used as keys to store the closures in a hash table. When the user clicked on a link, the server would hand control to the closure which would generate the next page.
- Interestingly, Paul said that the closures for each page were deleted when the next page was served. If the server received an ID number that it didn't understand it sent the user back to the "current" page. This meant that if a customer used the back button and clicked on a link, the application would respond by simply taking them back to where they were before they hit the back button. The only way to really interact with the application was through links on the current page, not using the browser navigation controls. I found this very interesting because one of the main interests in using continuations for web programming is that they solve the "back button problem" in a fairly graceful manner.
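The closure-in-a-hash-table trick is easy to sketch. This is my reconstruction of the idea, not actual Viaweb code, and every name here (MAKE-LINK, HANDLE-CLICK, the *ACTIONS* table) is invented:

```lisp
;; Sketch of the Viaweb-style trick: each link gets a random ID that
;; keys a closure; clicking the link runs the closure. A served page's
;; closures are discarded when the next page goes out, so stale IDs
;; (e.g. from the back button) just redisplay the current page.
(defvar *actions* (make-hash-table :test #'equal))
(defvar *current-page* (lambda () "home page"))

(defun make-link (closure)
  "Register CLOSURE under a fresh random ID and return the ID, which
would be embedded in a link on the generated page."
  (let ((id (format nil "~36R" (random (expt 36 10)))))
    (setf (gethash id *actions*) closure)
    id))

(defun handle-click (id)
  "Run the closure for ID, or redisplay the current page for an
unknown (stale) ID."
  (let ((action (gethash id *actions*)))
    (cond (action
           (clrhash *actions*)   ; invalidate the old page's links
           (setf *current-page* action)
           (funcall action))
          (t (funcall *current-page*)))))
```

A second click on the same ID, say from a page reached via the back button, finds nothing in the table and simply re-runs the current page's generator, which matches the behavior Paul described.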
So, in the case of Viaweb, Lisp was used for the heavyweight portion of the site and interacted with by a relatively small number of merchants (hundreds, not hundreds of thousands). The data built up there was then used to generate a static site that interacted with CGI scripts to implement the shopping cart itself.
Where is all this going? I'm not quite sure yet. Clearly web application architecture has evolved a lot since Viaweb was founded in 1995. Things like fork-and-exit CGI scripts are a thing of the past on a high volume site, with FastCGI being the minimum for modern efficiency. But it seems like there are some things to be learned from the REST style, too.
Finally, it's important to note that most REST advocates are positioning REST for web services interfaces as an alternative to SOAP/UDDI/etc., not necessarily as the style to use for interactive web applications. That said, the fact that Amazon uses it is very interesting.