Friday, April 30, 2004
Thursday, April 29, 2004
Daniel Barlow's blog says that CLiki got vandalized the other day and so a bunch of pages had to be restored. It looks like the asdf-install Resolver page got blown away during that, so if you get an error when trying to use asdf-install, try:
in the meantime. Daniel has temporarily halted updates to CLiki to wait out the vandal.
In other news, it looks like the CMUCL problem reported by Dave Pearson is still a problem. For some reason, CMUCL is not executing a
(uffi:load-foreign-library #p"/usr/lib/libresolv.so" :supporting-libraries '("c"))
form when it loads a compiled file (.x86f for CMUCL). I have no idea why and have appealed to the great Lisp gurus on c.l.l to help me out. If you have any ideas, check out the thread there and let me know.
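One thing I want to rule out (this is a guess on my part, not a confirmed fix) is whether the form simply needs to be wrapped in eval-when so it's forced to run both at compile time and when the compiled file is loaded:

```lisp
;; A guess at a workaround, not a confirmed fix: make sure the
;; foreign library is loaded in every situation -- when the file is
;; compiled, when the compiled .x86f is loaded, and when the source
;; is loaded directly.
(eval-when (:compile-toplevel :load-toplevel :execute)
  (uffi:load-foreign-library #p"/usr/lib/libresolv.so"
                             :supporting-libraries '("c")))
```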
Wednesday, April 28, 2004
Well, my first attempt at ASDF packaging was a bit less than a total success. Okay, okay, I messed up. ;-)
I had been doing some editing of my source files in Emacs and had run the packaging procedure in SLIME before remembering to save all the buffers. As a result, resolver.lisp had a bunch of old package junk in it that should not have been there. I have just created a version 0.2 and uploaded it. You should be able to download it via ASDF using the previous instructions. Apologies to anybody who ran across this, and a big thanks to Dave Pearson for finding the problem and reporting it. For some reason, when I did my own test with SBCL, things worked; Dave was using CMUCL, which was having problems.
Tuesday, April 27, 2004
With some great help from Nikodemus Siivola and Rahul Jain, I was finally able to get my resolver library packaged up as a simple 0.1 release. Resolver depends on UFFI. It's automagically installable via asdf-install with the following incantations.
* (require 'asdf)
* (require 'asdf-install)
* (asdf-install:install 'resolver)
Check out CLiki's asdf-install page for more information.
You should then be able to use it in SBCL as follows.
* (require 'asdf)
* (require 'resolver)
* (use-package 'resolver)
* (lookup "www.findinglisp.com" 'mx)
;; etc...
If you use Resolver in any interesting projects, I'd love to hear about them.
Chris Johnsen kindly pointed me to a free service that converts Atom feeds to RSS for those of you who only have RSS readers. The appropriate link is over to the left side. Thanks, Chris!
Bill Clementson summarized a nice long thread on comp.lang.lisp where people were commenting on when they learned Lisp. The interesting thing is the number of us who have found Lisp after more than 30 years of life. For me, I'll turn 37 next month and started programming seriously in Lisp at the beginning of this year (2004). The c.l.l thread started when Duane Rettig noticed a line in Thomas Schilling's signature quoting Paul Graham: "But I don't expect to convince anyone (over 25) to go out and learn Lisp." Duane stated that he started at age 31, and then everybody else piled on.
I think that Paul Graham was intending to say that people, once set in their ways, are less easily convinced about new things ("You can't teach an old dog new tricks," and all that). The interesting thing is that Lisp seems to cross that divide. In fact, I sometimes think that Lisp appeals to older programmers who have tried all the rest and found them lacking. With the perspective of many different programming languages and an understanding of what really matters for productivity, the seasoned programmer (wow, is that what I am now...?) is more likely to view Lisp idiosyncrasies like lots of parentheses as assets rather than annoyances. In short, after you have tried a bunch of solutions that don't work, you're jaded about marketing claims, and you just want to get the job done rather than be cool and follow the herd, Lisp is there.
That's what I did. I found Lisp.
PS: it was interesting to note that Bill titled his blog entry "Coming to Lisp." Exactly.
Saturday, April 24, 2004
A couple of folks wrote me to ask about feeds for this blog. Blogger.com makes it easy for me to add an Atom feed (just a checkbox in the config), which I had done previously. You can get to it via the new link over on the right column. Most aggregators seem to deal with the Atom format okay. I have not tried my hand at RSS yet, as Blogger.com doesn't offer it straight away. I also read about Bill Clementson's woes with the same issue and decided that I'd leave that one for another time. Let me know if your aggregator can't handle Atom-format feeds; if there is enough demand for RSS, I'll see what I can do.
I was traveling in Europe this week for business, London for three days and Paris for two. We actually had fantastic spring weather in London toward the end of the week. Absolutely fabulous! My wife's sister's family lives in Berkshire (expat Americans) and I got to see them Friday evening. We actually sat outside in the back yard and had a nice BBQ.
In the course of flying over to Paris, I bought a book at Heathrow called In Code: A Mathematical Journey, by Sarah Flannery. Sarah was a teenage student in County Cork, Ireland, whose father is a math professor. She ended up doing some work on public-key encryption schemes for a science fair she entered and won several large prizes worldwide. What is particularly interesting are some advances she made with respect to PK encryption using a matrix-based encryption scheme she later named the Cayley-Purser algorithm. The CP algorithm is faster than RSA because its fundamental operations are based on multiplication rather than exponentiation (RSA is notoriously slow but regarded as quite secure; an algorithm that was just as secure but faster would be highly interesting). The book recounts the story of her adventures with the media after she won several of these prizes. It also gives a nice introduction to number theory, gentle enough for most anybody, as she develops the project. While Sarah's algorithm was later found to have a couple of weaknesses that make it less secure than RSA, it's all quite interesting.
Generally, Sarah seems like a very well-adjusted teenager (now in her 20s and at Cambridge) from a pretty well-adjusted Irish family. You'll find this book interesting if you like math, encryption, or just good human-interest stories about people and the world.
Anyway, the Lisp tie-in here (you know I had to get to that at some point) is that I was reading parts of this in the United Airlines lounge in Heathrow today (Saturday here in California, though it feels like Sunday to me this minute... ;-). In particular, there is a section on Fermat's Little Theorem. This theorem basically allows you to quickly test whether a number is composite (not prime) without having to try to factor it. Basically, you calculate 2^(n-1) mod n. If that equals anything other than 1, you have a composite. If it equals 1, the number may still be composite and you'll have to do more tests. Anyway, the book gave an example of testing whether the number 11111 is composite (it is: 41 * 271 = 11111). When you evaluate 2^11110 mod 11111, you get 10536 instead of 1. So I just had to try this in Lisp. I fired up my laptop running CLISP and in the blink of an eye (this copied from SBCL):
* (mod (expt 2 11110) 11111)
10536
It's so nice to have a programming language that actually handles that without so much as a whimper.
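The test the book describes is easy to wrap up as a function. A minimal sketch (the function name is my own):

```lisp
;; Fermat's base-2 compositeness test, as described above: if
;; 2^(n-1) mod n is not 1, then n is definitely composite; if it
;; is 1, n might still be composite and needs further testing.
(defun probably-composite-p (n)
  (/= 1 (mod (expt 2 (1- n)) n)))

;; (probably-composite-p 11111) => T (41 * 271, as above)
;; Note that a result of NIL means only "probably prime": some
;; composites, like 341 = 11 * 31, fool the base-2 test.
```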
Sunday, April 18, 2004
I just finished Object-Oriented Programming in Common Lisp: A Programmer's Guide to CLOS, by Sonja E. Keene. Here are my thoughts:
- The book is pretty expensive. It's fairly thin at just 266 pages, including the index, but the price is $39.99 USD. The amount of information you get for your money is pretty small. This may or may not bother you.
- The book covers CLOS in good detail, showing how to use various features like generic methods, generic dispatch, class inheritance, and initialization.
- The examples used in the book are fairly simplistic. They lack some of the meat that you'd expect in a book like this. There are two fundamental examples used throughout the book: a set of mutex locks and a set of stream classes. I found the mutex locks particularly humorous, since Common Lisp doesn't standardize threads. This isn't to say that the classes aren't useful as a learning tool, but they can't effectively be used in a standard CL implementation like SBCL, CMUCL, or CLISP without some amount of modification.
- The book has a nice reference section toward the end that details all the CLOS syntax for various forms. I think that this is the most useful part of the book, long term. The book is thin enough to be kept at the ready on a shelf close to your keyboard and pulled out for this section alone.
- The book is fairly old. The copyright date is 1989. No updates have taken place since that time. I don't think that CLOS has changed much, but I think the book is a bit dated. Some of the examples show a bit of history associated with the author's employer at the time, Symbolics.
In short, I found the book underwhelming. On a scale of 1 to 5, I'd give it a 3. There are other introductions to CLOS that seem to give you a reasonable introduction without the high price. If you want to understand some of the more advanced CLOS programming techniques, like :before, :after, and :around methods, or method combination types, then this book has something good to offer. If you just need a short introduction to the basics, I think you could check out resources on the web and save your money.
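To give a flavor of the :before/:after style the book teaches, here's a minimal sketch of a toy lock class; the names are mine, not necessarily the book's:

```lisp
;; A toy lock class illustrating auxiliary methods. The :before
;; method runs ahead of the primary SEIZE method and rejects an
;; attempt to seize a lock that is already held.
(defclass simple-lock ()
  ((name :initarg :name :reader lock-name)
   (busy-p :initform nil :accessor lock-busy-p)))

(defgeneric seize (lock))
(defgeneric release (lock))

(defmethod seize ((lock simple-lock))
  (setf (lock-busy-p lock) t))

(defmethod seize :before ((lock simple-lock))
  (when (lock-busy-p lock)
    (error "Lock ~A is already held." (lock-name lock))))

(defmethod release ((lock simple-lock))
  (setf (lock-busy-p lock) nil))
```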
Saturday, April 17, 2004
Does anybody know of a source for a good set of blog themes? I really don't like any of the prefab ones provided by Blogger.com. That said, after a quick survey, I can't really find anything else that looks that much better. If you know of something, drop me an email. I'm going to be traveling all next week in Europe for business, so if I don't respond ASAP, please don't take offense. I'll try to figure out a better look for this site in the coming weeks.
I'm putting the finishing touches on my resolver library. This is an FFI library that interfaces with Linux's resolver library, libresolv.so. It lets you issue arbitrary DNS queries and returns the DNS reply packet as a set of lists. For instance:
* (lookup "www.findinglisp.com" 'a)
((3254 T QUERY T NIL T T NO-ERROR 1 1 2 2)
 (("www.findinglisp.com." A IN))
 (("www.findinglisp.com." A IN 10800 #(217 160 226 69)))
 (("findinglisp.com." NS IN 172800 "ns27.1and1.com.")
  ("findinglisp.com." NS IN 172800 "ns28.1and1.com."))
 (("ns27.1and1.com." A IN 113098 #(217 160 224 3))
  ("ns28.1and1.com." A IN 157740 #(217 160 228 3))))

* (lookup "www.findinglisp.com" 'mx)
((3255 T QUERY T NIL T T NO-ERROR 1 2 2 8)
 (("www.findinglisp.com." MX IN))
 (("www.findinglisp.com." MX IN 86400 10 "mx01.1and1.com.")
  ("www.findinglisp.com." MX IN 86400 10 "mx00.1and1.com."))
 (("findinglisp.com." NS IN 172786 "ns27.1and1.com.")
  ("findinglisp.com." NS IN 172786 "ns28.1and1.com."))
 (("mx01.1and1.com." A IN 71326 #(217 160 230 11))
  ("mx01.1and1.com." A IN 71326 #(217 160 230 12))
  ("mx01.1and1.com." A IN 71326 #(217 160 230 10))
  ("mx00.1and1.com." A IN 71335 #(217 160 230 12))
  ("mx00.1and1.com." A IN 71335 #(217 160 230 10))
  ("mx00.1and1.com." A IN 71335 #(217 160 230 11))
  ("ns27.1and1.com." A IN 113084 #(217 160 224 3))
  ("ns28.1and1.com." A IN 157726 #(217 160 228 3))))

* (lookup "aol.com" 'txt)
((3256 T QUERY T NIL T T NO-ERROR 1 1 4 4)
 (("aol.com." TXT IN))
 (("aol.com." TXT IN 300 "v=spf1 ip4:22.214.171.124/24 ip4:126.96.36.199/24 ip4:188.8.131.52/24 ip4:184.108.40.206/23 ip4:220.127.116.11/24 ip4:18.104.22.168/23 ip4:22.214.171.124/24 ptr:mx.aol.com ?all"))
 (("aol.com." NS IN 3600 "dns-01.ns.aol.com.")
  ("aol.com." NS IN 3600 "dns-02.ns.aol.com.")
  ("aol.com." NS IN 3600 "dns-06.ns.aol.com.")
  ("aol.com." NS IN 3600 "dns-07.ns.aol.com."))
 (("dns-01.ns.aol.com." A IN 3600 #(152 163 159 232))
  ("dns-02.ns.aol.com." A IN 3600 #(205 188 157 232))
  ("dns-06.ns.aol.com." A IN 3600 #(149 174 211 8))
  ("dns-07.ns.aol.com." A IN 3600 #(64 12 51 132))))
To interpret all this, you probably want to read the DNS spec, RFC 1035. The first sublist represents the header section of a DNS reply message. The next four sections are the query, answer, name server, and additional sections, respectively.
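As a hedged sketch (assuming lookup returns the five sections in exactly that order), you can pull out a single section with destructuring-bind:

```lisp
;; Pick out the answer section of a reply, assuming the reply is
;; the five-part list (header query answer authority additional)
;; described above.
(defun answer-section (reply)
  (destructuring-bind (header query answer authority additional)
      reply
    (declare (ignore header query authority additional))
    answer))

;; e.g. for the A lookup shown above, this would return
;; (("www.findinglisp.com." A IN 10800 #(217 160 226 69)))
```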
Anyway, this was done with about 500 lines of Lisp, using UFFI to interface with libresolv.so. I'll post a fully-baked version of this as soon as I can pull it together. I still need to learn a bit about packaging, as I'm not sure I have all the packaging and loading forms built correctly.
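For what it's worth, my working guess is that the ASDF system definition (resolver.asd) will look roughly like this; the component file names are placeholders:

```lisp
;; A hedged sketch of a minimal ASDF system definition for a
;; library that depends on UFFI. The file names below are my
;; guesses, not the actual layout.
(defpackage :resolver-system
  (:use :common-lisp :asdf))
(in-package :resolver-system)

(defsystem resolver
  :version "0.1"
  :depends-on (uffi)
  :components ((:file "package")
               (:file "resolver" :depends-on ("package"))))
```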
So I went in and checked my weblogs this morning. Looks like both Lemonodor and Planet Lisp found my blog (how did they do that?). Looks like Lemonodor found it first and then Planet Lisp picked it up, too. The visitor count skyrocketed this week. Gosh, I feel almost famous. I mean, I have been reading Lemonodor for quite some time now. John Wiseman is so interesting that I even went back and read about a year's worth of archives.
Both Lemonodor and Planet Lisp also point at Jack Coleman's blog, Programming Languages, a Journey. Jack's blog is very similar to mine in terms of focus, but seems to cover a somewhat broader set of topics, at least judging from the title. Coincidentally, we both seem to have started at roughly the same time (last week). Is it true that good ideas run in streaks?
Thursday, April 15, 2004
I flew home last night from a business trip and was reading Lisp in Small Pieces on the plane. Needless to say, I don't think the pieces were quite small enough. I was going through the chapter on denotational semantics and I think my head exploded or something. Very confusing. If you find continuations a bit strange (I still have a bit of a time wrapping my head around them), then imagine more than a few pages implementing a Scheme interpreter using continuations and lots of lambda calculus written in full Greek letters! Zoinks! I'll have to re-read it in a couple of months. I'm sure the next time around will be much easier.
Wednesday, April 14, 2004
I have been reading Christian Queinnec's Lisp in Small Pieces over the past couple of weeks. It's a fascinating book that walks through several Lisp implementations (Scheme mostly, but various CL features are also covered in good detail). It covers both interpreters and compilers and brings out a lot of issues with the implementation of various features. For instance, it has a whole chapter devoted to the Lisp-1 vs. Lisp-2 debate and shows how each of the various systems is implemented behind the scenes. It covers continuations extensively. The book is pretty expensive (~$75 USD) but it's also very deep. WARNING: heavy reading! Non-trivial coverage of things like lambda calculus and denotational semantics. If you want a book to teach you how to program in Scheme or Lisp, this is not it.
Tuesday, April 13, 2004
I found an interesting 3-part article by Chris Double on using continuations for web programming. Chris uses Scheme, which has first-class support for continuations. Common Lisp doesn't support continuations, but it's still an interesting read, and some of the ideas can be simulated in Common Lisp in other ways. I'm going to be doing a bit of a project here shortly to compare the productivity of a CL-based web program with the Servlet/JSP/Struts models I'm also familiar with. It will be interesting to see what comes out on top.
I have posted some links on the side of this blog that will help you put together a free, first-class Lisp programming environment for Linux. It consists of SBCL, Emacs, and SLIME. I have been using this setup for a couple of months now and find it quite productive. I routinely have an SBCL session running in Emacs/SLIME for days, adding and correcting function definitions as my kids allow me some time.
Wednesday, April 07, 2004
It's always difficult to write the first post of a new blog. Where do you start? What do you say? Will anybody actually read this and care anyway?
Well, in the spirit of just charging forward and getting on with it, let me describe Finding Lisp and why I created it.
I have been programming for more than 25 years. During that time, I have worked with a large number of programming languages and systems. Like many who came to programming during the 1970s, I first learned BASIC. This was followed over the years by assembly language, Pascal, Forth, Fortran, C, Scheme, C++, Perl, and finally Java. Through the years and this set of experiences, I have learned a lot about the art of programming and the tools a programmer uses to practice this art. Over time, these experiences have cultivated a philosophy of programming. It looks something like this:
- Programming tools, and specifically programming languages matter. You can cut down a tree with a variety of tools, but a good, sharp saw will do the job much faster and easier than a butter knife. BASIC is a butter knife.
- There are "natural speed thresholds" in the world, mostly associated with the limits of human perception. What I mean by this is that invariably when discussing programming languages, developers will bring up the issue of speed. Specifically, somebody will say, "Yeah, language X is great, but it's slower than language Y, so I use language Y instead." Typically, somebody comes up with a micro-benchmark that shows that language Y does indeed take 100 microseconds less time than language X to perform some task. The question is, who cares? There is "fast" and then there is "fast enough." As long as the user perceives the operation as being fast enough, there is only marginal benefit to being faster.
- With Moore's Law in effect for decades now, the average computer is fast enough for most every common task. This is not to say that this is the case for every task or even that it will remain the case forever.
- Now that we have achieved "fast enough" for a large variety of tasks, it makes far more sense to say, "What can we use all these spare CPU cycles for?" In my opinion, we "spend" those cycles making programmers more productive. While Moore's Law has given us oodles of transistors to play with and computer processing speeds have ramped exponentially, the wet-ware inside a programmer's head is basically the same as it was thousands of years ago. People can only juggle complexity to a fixed extent. Note that the same things have happened previously. The move away from assembly language to higher level languages like Fortran, Pascal, and C came with the recognition that it was worth "spending" a few of those precious CPU cycles on a compiler that would allow programmers to write code more productively.
- Now the question is, if we agree that we can spend some cycles to make programmers more productive, what should that language be? Throughout my programming career, it has been interesting to watch languages evolve. I started with BASIC and ended up last at Java. Java has many of the hallmarks of a good modern language. It "spends" CPU cycles to make programmers more productive and largely succeeds. I am far more efficient as a programmer working in Java than I ever was in C++. Yes, my programs may run fractionally slower, but it's hard to tell on a modern CPU unless you were to compare two versions of the program side-by-side. While Java is not perfect, it has gone a lot farther than many other languages. The question is, what next?
- It seems to me that many languages are adding features that are already in existence in Lisp. For instance, Java and the various scripting languages like Perl, Python, and Ruby all have dynamic memory management ("garbage collection"). Languages like C++ have added templates to try to add something resembling a good macro system. These concepts have all been working quite well for decades in Lisp. So if most other languages are evolving towards Lisp, why not just use Lisp and get there faster?
Finding Lisp is a chronicle of my journey to learn and utilize Lisp. I hope it will become a guidepost for newbies interested in Lisp as well as a resource for those who decide to take the plunge.