Lisp Listener
(Mumble (About Lisp))
By Paul Snively, Wheeling, IL
Note: Source code files accompanying article are located on MacTech CD-ROM or source code disks.
It's been a while since anyone has written anything about any dialect of Lisp, or anything that has to do with Lisp at all, in MacTutor. This might lead one to conclude that nothing of any importance is going on in the Lisp world, or with Lisp as it relates specifically to the Macintosh. This is somewhat saddening, because nothing could be further from the truth.
The most obvious and most visible thing that's happened in the Lisp world relative to the Macintosh is that Apple Computer, Inc. purchased the assets of Coral Software, Inc., in a move that appears to have surprised several people, myself included. It's quite rare for Apple to out-and-out buy a third-party software-development house; my interpretation of the acquisition is that it demonstrates Apple's corporate commitment to many different kinds of artificial intelligence research with respect to future system architectures, and that Apple uses Allegro CL on the Macintosh a great deal internally (Apple seems to be a stickler for having control of the development systems that it uses itself, which is why MPW exists in the first place).
I won't go into great detail in this space about Allegro CL [for details, see MacTutor, March 1988]. Suffice it to say that the current version, as of this writing, is 1.2.2, and that when it ships through APDA it will include the base language, the Foreign Function Interface that will allow linking MPW C or Pascal code into your Lisp program, and the Standalone-Application Generator that will allow you to turnkey your Lisp program. Also available from APDA for twenty dollars will be the current version of Portable CommonLOOPS, about which I'll have more to say a bit later.
The other big news in the Common Lisp community on the Macintosh is that ExperTelligence, Inc. has licensed Procyon Common Lisp from Procyon Research, Ltd. in the United Kingdom for distribution in the United States of America. The Procyon implementation has a great deal going for it, not the least of which is an almost religious adherence to the specification of Common Lisp set forth in Steele's Common LISP: The Language, hereafter called CLtL, which is the canonical description of the language, published in 1984. This is a two-edged sword: there is a fair amount of Common Lisp source out there that presumes a somewhat more liberal interpretation of CLtL than the Procyon implementation supports. This is rarely, if ever, damaging (the only example I've come across in actual practice is that FIND-SYMBOL wants an honest-to-goodness package object as its optional second parameter, whereas Allegro CL, and apparently most Common Lisp implementations, will allow a package name to be passed).
Another thing that can be considered a plus of the Procyon system is that it seems to know a fair amount about the Macintosh as far as data structures are concerned. For example, Procyon includes a pretty nifty structure editor. A structure editor, as opposed to a text editor, deals with Lisp data structures in the heap, rather than their textual representation in a file. One of the interesting things that it does is to treat bit-vectors, one of Common Lisp's data types, as bitmaps, providing a fatbits-style editor to modify the contents of the bit-vector. It's touches like this that make the Procyon system a pleasure to use.
In terms of system performance, Procyon and Allegro subjectively seem very close. Both companies, upon hearing that I intended to write about the two systems, made comments about how performance could be tweaked out of their systems, but I'm going to adopt here the attitude that there are really only two levels of performance for any given system: good enough and not good enough. It's a subjective measurement anyway; FullWrite Professional's performance is good enough to me, but I know a great many people who do not use it precisely because its performance is not good enough to them. To me, both Allegro CL and Procyon Common Lisp perform well enough.
Another thing about Procyon that you may or may not appreciate is the fact that it comes with a two-volume set of documentation, and that Procyon Common Lisp's DOCUMENTATION function is indexed to the printed documentation (although, happily, lambda lists for functions are still available online). It's a good thing that Procyon's documentation is so comprehensive and large; the implementation offers quite a few extensions above and beyond CLtL's standard definition.
Procyon includes several examples, both of generic Common Lisp programming and of several of the Procyon-specific extensions, such as the so-called Common Graphics system that implements Procyon's interface to the Macintosh Toolbox and OS. Oddly, Procyon, an ostensibly complete Common Lisp implementation, does not ship with CLtL, a must for any Common Lisp programmer, nor does it ship with any of the standard Common Lisp-based texts such as Winston and Horn. It's my opinion that this somewhat mars the educational value of an otherwise well-documented system.
In the 2.0 release that I received for evaluation, Procyon's text editor is limited to a maximum of 32K, a limitation that both Procyon and ExperTelligence assure me will be removed, if it hasn't been already by the time of this writing, or by the time it reaches the shelves. I'm also told that some items, such as a file compiler and an implementation of the forthcoming Common Lisp Object System (CLOS), are under development and will either be integrated into the system or available separately for a modest charge.
Speaking of CLOS, Procyon 2.0 does not ship with any kind of object system, meaning that it can't easily be extended to support new types of OS and/or Toolbox data structures and functions. There is, in fact, a fair amount of documentation describing how to create custom stream types, since most Common Lisp I/O revolves around streams. Since there is no object system, adding new stream types seems needlessly complex and unwieldy, at least when compared to Allegro CL, which allows you to define subclasses of the basic *STREAM* class and then simply define a few methods to have a new stream type (a good example of this is given in Allegro's Serial-Streams.Lisp source file, which defines a stream type that allows Allegro CL to do stream-oriented I/O through the serial ports).
Both Allegro CL and Procyon Common Lisp are good, solid, reliable implementations of the Common Lisp standard, a standard which seems to be of increasing importance in the world at large and the Macintosh world in particular. Both will at least consider running on a two-megabyte machine, although realism dictates at least four to get anything accomplished, and, as with all systems, eight is something of a dream. Procyon offers a few more tools, such as the structure editor, and seems somehow Macier. Allegro offers an object system, easier Toolbox and OS access, and feels Lispier. Allegro CL also now benefits somewhat from being supported by Apple Computer, Inc. Individual preference will be the final arbiter in terms of which system you find yourself using.
[In the late-breaking news department, ExperTelligence will have Procyon Common Lisp 2.1 available by the time you read this. 2.1 will remove the limitations noted above, and the product line is divided into three different levels, as shown:
            Personal      Professional  Developer
Retail:     $620.00       $1250.00      $2000.00
Education:  $445.00       $900.00       $1450.00
Includes:   Procyon 2.1   Procyon 2.1   Procyon 2.1
CLOS = Common Lisp Object System, FFI = Foreign Function Interface, Runtime = Standalone Application Generator, and ID = Interface Designer.
If the Developer's version of Procyon sounds intriguing, it should. As you may know, ExperTelligence's Interface Designer was the precursor to NeXT's Interface Builder, which is a significant element in why the NeXT computer is so easy to program. CLOS answers my criticisms about Procyon lacking a real object system, and the FFI and Runtime systems are great if you're doing serious commercial work. ExperTelligence tells me I'll receive a review copy of the Developer's system once all components of it have cleared customs as they come from the United Kingdom, and I will report on them once I have them. Procyon 2.0 owners should call ExperTelligence at (805) 967-1797 for upgrade information.]
A little bit earlier I mentioned Portable CommonLOOPS in passing. For some time there has been something of a war raging among a variety of object-oriented programming systems that have been written in Lisp. Lisp Machines, Inc. (now defunct) had Object Lisp, which survives in Allegro CL. Symbolics, Inc. and MIT have Flavors, which is offered by a variety of vendors and is available as an option for Allegro CL. Xerox PARC has Portable CommonLOOPS, the current generation of what started as LOOPS, became CommonLOOPS when Common Lisp was defined, and was then made portable in the sense that it has been customized for several of the most popular implementations of Common Lisp, including Allegro CL for the Macintosh.
One reason that I think Portable CommonLOOPS is important is that it's a very powerful, flexible object-oriented programming environment that runs very well in Allegro CL. The other, and perhaps more important, reason is that an ANSI committee is standardizing the Common Lisp language, and that standard will include the Common Lisp Object System (CLOS). Each version of Portable CommonLOOPS that Xerox PARC releases is a little bit closer to CLOS, which means that it's a little bit closer to the future. Also, Portable CommonLOOPS is in the public domain, which means the price is right, and you get source code. If you're thinking about writing a large object-oriented program in Common Lisp, think strongly about writing it in Portable CommonLOOPS. That way it will stand a much better chance of surviving the inevitable transition to an ANSI-standard Common Lisp implementation. The Portable CommonLOOPS sources are available via anonymous FTP from a number of UNIX sites on the nets, as well as from APDA (my understanding is that they'll charge $20.00 or so for the distribution).
One problem with Portable CommonLOOPS is that the source code is the only documentation, apart from a handful of release notes that are really only of use if you're already part of the intelligentsia with respect to Portable CommonLOOPS. Luckily, since Portable CommonLOOPS is moving toward being a CLOS implementation, you can largely get away with reading CLOS documentation and applying it to Portable CommonLOOPS (which is the whole point anyway). To that end, I recommend Winston and Horn Third Edition to the person who needs a complete Common Lisp text that devotes some space to CLOS, and Object-Oriented Programming in Common Lisp to the person who already has a working knowledge of Common Lisp but is perhaps a CLOS tyro. This book, by Sonya Keene, takes the reader through most aspects of CLOS from the bare basics to advanced subjects, using examples that actually mean something, such as process locking and implementing I/O streams within CLOS. Keene, an employee of Symbolics, Inc. and a member of the ANSI committee responsible for the CLOS standard and specification, obviously knows her subject well and presents it in a highly coherent manner.
All that has been going on has not necessarily been going on in the Common Lisp world, however. Since I last wrote about Lisp in MacTutor, Semantic Microsystems, developers of MacScheme and MacScheme+Toolsmith, have also been busy. As of this writing, the current version of MacScheme+Toolsmith is 1.51. This version includes support for much of Inside Macintosh Volume V; a few new items in the menus, such as Load, which allows you to load a Scheme file without first opening it and without typing the full pathname in the transcript window; and some new contributed code.
The first contribution is a very nice macro called EXTEND-SYNTAX that uses a pattern-matching system to allow the construction of other macros that extend Scheme's syntax. This utility is described in The Scheme Programming Language by R. Kent Dybvig. It makes writing fairly complex macros much less painful than it normally is in Scheme. As an example, consider the LET special form. As most Scheme programmers realize, what LET does is to encapsulate the body of the LET in a lexical environment, which it then applies to the values assigned to the names. With EXTEND-SYNTAX, you would express this idea like this:
; Here's how the LET macro could have been defined (as LETT):
(extend-syntax (lett)
  ((lett ((x e) ...) body ...)
   ((lambda (x ...) body ...) e ...)))
In other words, we're defining a new piece of syntax, LETT, that can accept some list of pairs (that's what the (x e) ... means) and some body of expressions (the body ...), and that will expand to a lambda expression whose parameter list contains the first elements of the pairs (that is, the names) and whose body contains all the expressions of the body, with that lambda expression applied to all of the second elements of the pairs (that is, the values). In other words,
(lett ((x 3) (y 4))
(+ x y))
would expand to:
((lambda (x y)
   (+ x y))
 3 4)
This is just a simple example, but hopefully you can see how expressing macros as syntactical patterns that expand to other syntactical patterns makes macro-writing much easier.
The other major contribution shipped with 1.51 is a MacScheme implementation of Texas Instruments' SCOOPS system. This is the object-oriented programming system that TI wrote for and ships with their own PC-Scheme system, and Semantic Microsystems' own John Ulrich did the MacScheme version. If you've never tried your hand at object-oriented programming and you own MacScheme+Toolsmith 1.51 or later, take a look at SCOOPS. The MacScheme version leaves some fairly important optimizations out, and unfortunately uses EVAL, which means that standalone applications generated with it will not have any dead code removed, since the routine that finds dead code can't rip anything out lest it be needed for EVALing at run-time. These shortcomings aside, SCOOPS remains a nice example of how object-oriented programming can be implemented, and seems to be a fairly powerful system.
All of this assumes that you have some Scheme experience. However, it sometimes seems that only people who went to MIT or Indiana University to study computer science have even heard of Scheme. Semantic Microsystems must feel the same way--they recently announced the introduction of Scheme Express, a compact Scheme for those who want to learn the language while investing a minimal amount of money. Priced at $69.95, Scheme Express should fit the bill quite nicely. Apparently Scheme Express is extremely small, and correspondingly slow: it will fit into 512K, and will be quite comfortable even on a Mac Plus. Semantic Microsystems claims that it runs at about 25% of the speed of MacScheme, and that compiled Scheme Express programs will be one quarter of the size of compiled MacScheme programs.
My advice to anyone and everyone who is curious about Lisp at all is this: give Semantic Microsystems a call at (503) 643-4539. Chances are pretty good that you'll hear the cheerful voice of Kathi Williams at the other end of the line, or perhaps you'll speak to John Ulrich (if you do, ask him about object-oriented programming using lexical closures as objects. Then sit back and be prepared to learn a lot). And please tell either of them that I sent you. Then ask about Scheme Express. At $69.95, you can't go too far wrong as far as a learning environment is concerned. If you do get Scheme Express, the next step is to run out and buy a copy of Structure and Interpretation of Computer Programs, by Abelson and Sussman.
Abelson and Sussman is MIT's introductory computer-science text. Don't let that scare you off; many students face it every year with no computer programming experience, let alone any Scheme experience. The book is a gem; the principles that it espouses and design theory that it purveys apply regardless of your background or language preferences. The code given happens to be in Scheme, and the exercises are geared toward being implemented in Scheme. So with Scheme Express and this book by your side, you are well-equipped to learn Lisp, and have invested about $100 or so. That's not unreasonable, even if you're only mildly curious. There is much for me to learn from Abelson and Sussman; you might discover that there is for you, too.
Abelson and Sussman isn't your everyday intro text. Things start off easily enough with Chapter 1, Section 1 having you find square roots using Newton's method.
Section 2 deals extensively with why some algorithms are better than others (better in most cases meaning significantly faster) and has you write a function that tests for the primality of some number. This section is worth paying a lot of attention to--it thoroughly describes the various orders of certain algorithms (O(n²), O(n), O(log n), etc.) and gives you some mental tools with which to reduce the order of your algorithm (that elusive O(log n) algorithm is much faster than that intuitively obvious O(n²) one).
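To make the order-of-growth discussion concrete, here is a small Common Lisp sketch (my own, not code from the book) of the kind of primality test those exercises lead you toward; checking divisors only up to the square root of n is what keeps the cost down:

```lisp
;; Trial-division primality test. Checking divisors only up to
;; (isqrt n) keeps the work proportional to the square root of n.
(defun primep (n)
  (and (> n 1)
       (loop for divisor from 2 to (isqrt n)
             never (zerop (mod n divisor)))))

(primep 17)  ; => T
(primep 91)  ; => NIL (91 is 7 x 13)
```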
Section 3 introduces the idea that functions are just another kind of data, and in particular that functions can take other functions as parameters, and/or can return functions as their values. This is one of Lisp's distinguishing features as a language--there is no enforced distinction between code and data. Functions can be passed as arguments, returned as values, stored in variables, etc. In Chapter 1, a good grasp of procedural abstraction is conveyed in only sixty-seven pages.
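As a quick illustration of functions-as-data (mine, not the book's), here is a function that builds and returns a new function in Common Lisp:

```lisp
;; COMPOSE takes two functions and returns a brand-new function;
;; the result can be stored, passed around, or called like any value.
(defun compose (f g)
  #'(lambda (x) (funcall f (funcall g x))))

(funcall (compose #'1+ #'isqrt) 15)  ; => 4, since (isqrt 15) is 3
```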
Chapter 2 is about data abstraction. Section 1 covers how you might implement arithmetic operators for rational numbers, such as 1/3. A later example describes interval arithmetic, such as is used every day by engineers who use electronic components that have tolerance ranges, such as a resistor that is within 5% of having 10 ohms of resistance.
Section 2 provides a lucid discussion of hierarchical data, such as trees, and covers such seemingly divergent examples as symbolic differentiation, representing sets, and Huffman encoding trees.
Section 3 revolves around the notion that the representation of data should be irrelevant to whatever uses the data, assuming that the published interface remains the same. The example of complex numbers is used; complex numbers can be expressed either in rectangular form (real part and imaginary part) or polar form (magnitude and angle). The user of the complex number system should neither know nor care which representation is being used (in fact, both might be used at different times). A complex arithmetic system is implemented to demonstrate this idea, and is then generalized to demonstrate data-directed programming.
Section 4 deals with defining generic operators that can manipulate data of varying types. The exercises in this section integrate the normal arithmetic functions with the rational and complex ones that you wrote in earlier exercises. Problems of data type coercion are dealt with. The example of adding symbolic algebra (adding and multiplying univariate polynomials) is given, demonstrating that by taking advantage of the generic arithmetic operators, the polynomial functions can operate on polynomials with coefficients of any type known by the generic arithmetic functions. If coercion was added according to a prior exercise, then the functions can also handle polynomials with differing types of coefficients. Rational functions (rational numbers whose numerators and denominators are polynomials) are also added to the generic arithmetic system.
Chapter 3 is perhaps the most interesting chapter in the book, in my opinion. It talks about issues of Modularity, Objects, and State. Section 1, interestingly, talks about assignment. It may seem rather strange to have a textbook not get around to mentioning assignment until 167 pages have gone by, but the thing that most people don't seem to understand about Lisp is that it's not one of the so-called procedural languages. It is rather one of the so-called functional languages. Functional doesn't mean that the language does things; it means that everything in Lisp is a function that returns some value. There is an entire branch of computer science devoted to the exploration of this approach to programming (for example, there's an interesting beast called the applicative-order Y-combinator that allows you to write recursive functions without naming them--that is, without using assignment--a task that, on the face of it, seems impossible).
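For the curious, here is one way the applicative-order Y-combinator looks when transliterated into Common Lisp (a sketch of mine, not code from the book); note that the factorial function below is never given a name it can use to call itself:

```lisp
;; The applicative-order Y-combinator: takes a function-building
;; function F and returns its fixed point, enabling anonymous recursion.
(defun y (f)
  (let ((g #'(lambda (x)
               (funcall f #'(lambda (n)
                              (funcall (funcall x x) n))))))
    (funcall g g)))

;; Factorial, written without ever naming the recursive function:
(funcall (y #'(lambda (self)
                #'(lambda (n)
                    (if (zerop n)
                        1
                        (* n (funcall self (1- n)))))))
         5)  ; => 120
```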
Section 2 explains in depth exactly what environments are, and how they influence how expressions are evaluated, and their relationship to variables and internal definitions.
Section 3 details how mutable data can be used to represent data structures such as queues and tables. The examples given here are among the most interesting in the book, in my opinion. The first is a digital circuit simulator that allows you to build circuits out of AND gates, OR gates, and inverters, specifying a signal delay for each component, and applying probes to various wires and then applying signals to various wires. The probes show the signal and the simulation time at which the signal appeared, so that you can determine, for example, the component configuration(s) that will enable a binary adder to work as far as the delays are concerned. The second example is a system supporting the propagation of constraints through a constraint network. One box might constrain one connector to be the multiple of the other two connectors, for example. Another constrains its single connector to have a constant value, and so on. By building networks of these constraints, arithmetic relationships (as opposed to functions, which are one-way) can be supported. For example, the relationship between Celsius and Fahrenheit temperatures is 9C = 5(F-32). Note that this relationship can be solved algebraically for either C in terms of F or F in terms of C, and in most programming languages, that is exactly what you must do: solve for one variable in terms of the other, and express that as a function in the language. With the constraint system, you can make a network out of two multiplication constraints, an additive constraint, and three constant constraints. You can then put a probe on the two connectors, C and F, and set one or the other to a value. The probes in either case will show the values of C and F as the various constraints are activated. The system is bidirectional; it doesnt matter whether you provide a value for C or F as the probes will provide the correct answer in either case.
Section 4 is where life really gets interesting. The previous three sections have dealt extensively with the concepts underlying assignment and local state, and have employed rather crude, homebrew techniques to implement what amounts to object-oriented programming. Section 4 instead takes advantage of the fact that functions are simply another Lisp data type that can be stored in lists or variables for later use to implement a beautifully strange and strangely beautiful data structure called a stream. This is not a stream in the Common Lisp sense. Rather, an Abelson and Sussman stream is a collection of data that feels very much like a list, but has some special properties. First of all, you must be very careful about using assignment in the presence of streams. That is, a stream is not mutable in the same way that a list is. Secondly, streams can represent some data structures much more efficiently than a list can, in the sense that not all of its elements have been evaluated (I'll say more about this later). For this same reason, a stream can represent some infinitely-long data structures that cannot be represented as lists at all. A good example of this last ability would be the stream of all prime numbers, which isn't difficult to generate as a stream, but is quite impossible to generate as a list.
Chapter 4 conveys the importance and power of metalinguistic abstraction. That is, it points out that oftentimes the best approach to using a programming language to solve a problem is to use that language to implement another language in which the problem can be trivially solved. Previous applications of this idea (the language for simulating digital circuits and the language for expressing networks of constraints) are pointed out. In Section 1, Scheme's own evaluator is rewritten in Scheme.
In Section 2, this new evaluator is modified to examine the effects of using normal-order evaluation as opposed to applicative-order evaluation, as well as to examine the results of using various methods of binding values to variables.
Section 3 discusses packages, which are environments that distinguish among variables having the same name. Their implementation and use to prevent name conflicts in a generic arithmetic system are explained.
Section 4 shows how metalinguistic abstraction can be used to create a programming language that follows an entirely different paradigm than Lisp's. Logic programming, also known as declarative programming, is discussed. Much attention is paid to the issues surrounding logical expressions such as AND, OR, and NOT (not to be confused with the Lisp primitives of the same names) and their differences from mathematical logic.
Section 5 details the logical query system, and goes into great depth about pattern matching, rules, and unification (a generalization of pattern matching that makes the application of rules possible). Again the emphasis is on the usefulness of being able to implement a new language in Lisp that can then be used to solve the original problem.
Chapter 5, ironically, descends from what might seem the extreme high end of the programming spectrum--using a language to implement another language--to what might seem the low end of that same spectrum, namely creating processes built around the concept of having a certain number of registers, a handful of simple instructions, and a stack. Section 1 covers the simulation of an arbitrary register machine.
Section 2 describes the explicit-control evaluator. This differs from the evaluator of chapter 4 in that rather than being written metacircularly (that is, rather than being a Scheme evaluator written in Scheme) it is written in the language of the simulated register machine. Issues such as sequence evaluation and tail recursion optimization are covered, as are the handling of special forms, such as conditionals. A sample run of the evaluator is discussed, and then the issue of internal definitions is touched upon.
Section 3 discusses the compilation of Scheme programs for the simulated register machine. Compiling expressions, compiler data structures, code generation, interfacing to the evaluator, and lexical addressing are all presented in some depth.
Finally, Section 4 reveals some of the secrets of Lisp's dynamic memory management, discussing how memory is allocated and how unused memory is automagically returned to the heap for later reuse.
The scope of Structure and Interpretation of Computer Programs may seem a bit daunting at first, but with a real Scheme such as Scheme Express in front of you, the secrets within this text come tumbling out at a sometimes alarming rate, and there is a great deal of pleasure to be had from mastering some aspect of programming that had eluded you before (at least, that has been my experience).
When talking about chapter 3, I mentioned streams in passing and promised to talk about them in more detail, which I will do, because you can do some pretty interesting things with them. Since I'm going to implement them from top to bottom in Common Lisp, I'd better call them lazy-lists instead of streams, since Common Lisp uses the term stream to describe an I/O sink. First of all, an important thing to understand is that lazy-lists are generally used to model some situation where there is a flow of data from point A to point B. As a result, there are a handful of standard operations that you might like to be able to apply to the contents of any given lazy-list. For example, there may be some function that you wish to apply to every element of the lazy-list. Similarly, there may be elements of some lazy-list that you wish to filter out of that lazy-list if they satisfy some predicate. It is also quite common to take the elements of a lazy-list and accumulate them together somehow. Another common function is to append one lazy-list to another.
Let's define the lazy-list data structure and primitive operations first. Then we can define these higher-order operations on lazy-lists. A lazy-list is nothing more than a list of two elements. The first element is anything; the second element is a function that, when evaluated, returns another lazy-list consisting of the next element in the lazy-list followed by a function that returns the next lazy-list, etc. At any point along this process, the function in question could actually be the-empty-lazy-list, signifying the end of the lazy-list (although, as has been noted, this is not always necessary; some lazy-lists are infinitely long).
Lazy-lists are built using something very much like CONS. In fact, we'll call the function LAZY-CONS. The difference between LAZY-CONS and CONS is that LAZY-CONS doesn't immediately evaluate its second parameter. Instead it delays it. DELAY is a standard part of Scheme, but it isn't in most Lisps. In Common Lisp, you would implement DELAY something like this:
(defun memoize (fun)
  (let ((already-evaled nil) (value nil))
    #'(lambda ()
        (if already-evaled
            value
            (progn
              (setf already-evaled t)
              (setf value (funcall fun)))))))
(defmacro delay (thing)
  `(memoize #'(lambda () ,thing)))
This is actually an optimized delay; the MEMOIZE function is a bizarre little thing that takes a function as a parameter and wraps it up in a neat little package that basically determines whether the function has ever been called before. If it hasn't, it calls it and preserves (and returns) its returned value. If it has, it simply returns the stored value. In other words, any function that we pass to MEMOIZE will only get called once, assuming that what is actually called is the function that MEMOIZE returns.
DELAY, then, is a macro (so it doesn't evaluate thing, which would defeat the whole purpose) that first makes a function that, when called, returns thing as its value (by wrapping thing in a lambda expression of no arguments), then passes that function to MEMOIZE, so that thing only gets evaluated when it absolutely has to be, and even then only once.
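A quick demonstration (assuming working versions of the MEMOIZE and DELAY definitions above) makes the evaluate-at-most-once behavior visible; since a promise is just the function that MEMOIZE returned, FUNCALLing it forces it:

```lisp
;; The INCF only happens the first time the promise is called.
(defvar *evaluations* 0)
(defvar *promise* (delay (progn (incf *evaluations*) 'the-answer)))

*evaluations*        ; => 0 -- nothing has been evaluated yet
(funcall *promise*)  ; => THE-ANSWER, and *evaluations* is now 1
(funcall *promise*)  ; => THE-ANSWER, and *evaluations* is still 1
```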
Building a lazy-list, then, would look something like this:
;;Here is our lazy-list CONStructor:
(defmacro lazy-cons (thing lazy-list)
  `(cons ,thing (delay ,lazy-list)))
The flip side of building a lazy-list, of course, is accessing its elements. Since it's like a list, we'll write LAZY-CAR and LAZY-CDR to get at its components. LAZY-CAR is very straightforward:
;;Here is LAZY-CAR:
(defun lazy-car (lazy-list)
  (car lazy-list))
LAZY-CDR is only slightly more involved, since the remaining item in the lazy-list data structure has been delayed:
;;Here is LAZY-CDR:
(defun lazy-cdr (lazy-list)
(force (cdr lazy-list)))
FORCE is used to return the value of the cdr of the lazy-list, and thanks to MEMOIZE, it will do so whether it's the first time or the umpteenth time without re-evaluating the original object every time. Since the cdr of a lazy-list is a function, FORCE is simply:
(defun force (promise)
  (funcall promise))
With these functions, we have the bare-bones basics of the lazy-list manipulation functions. First let's define a couple of useful items:
;;Define the empty lazy list:
(defconstant the-empty-lazy-list ())
;;How do we know if a lazy-list is empty?
(defun empty-lazy-list-p (lazy-list)
(eq the-empty-lazy-list lazy-list))
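With just these primitives in hand (and assuming the definitions above are complete and working), a small sanity check is possible:

```lisp
;; Build a two-element lazy-list and walk it; note that the second
;; element isn't actually evaluated until LAZY-CDR forces it.
(let ((ll (lazy-cons 1 (lazy-cons 2 the-empty-lazy-list))))
  (list (lazy-car ll)
        (lazy-car (lazy-cdr ll))
        (empty-lazy-list-p (lazy-cdr (lazy-cdr ll)))))
;; => (1 2 T)
```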
Once we've defined these, for the sake of termination testing, formulating the remaining higher-order lazy-list functions isn't too difficult. For example, appending lazy-list l2 to lazy-list l1 works like this:
;;This is to lazy lists what Common Lisp's APPEND is to normal lists.
(defun append-lazy-lists (l1 l2)
  (if (empty-lazy-list-p l1)
      l2
      (lazy-cons (lazy-car l1)
                 (append-lazy-lists (lazy-cdr l1) l2))))
Accumulating the elements of a lazy-list looks like this:
;; This is a nice, generic accumulation function that takes a combiner
;; function (usually #'+ or #'cons or something like that), an initial
;; value (typically 0 or 1 for numeric accumulations or () for lists)
;; and some lazy-list to apply all of this to. Note that the combiner's
;; second argument is a delayed accumulation of the rest of the lazy-list.
(defun accumulate (combiner initial-value lazy-list)
  (if (empty-lazy-list-p lazy-list)
      initial-value
      (funcall combiner (lazy-car lazy-list)
               (delay (accumulate combiner initial-value
                                  (lazy-cdr lazy-list))))))
;;This function prevents infinite recursion when accumulating infinite
;;lazy-lists; note that its second argument is a delayed lazy-list.
(defun interleave (l1 l2)
  (if (empty-lazy-list-p l1)
      (force l2)
      (lazy-cons (lazy-car l1)
                 (interleave (force l2) (delay (lazy-cdr l1))))))
There are times when what you have is a lazy-list of lazy-lists, and you want to reduce that to a lazy-list of non-lazy-list elements. Using ACCUMULATE and INTERLEAVE, both defined above, it's easy:
;;This handy thing uses ACCUMULATE to flatten a lazy-list of lazy-lists.
(defun flatten (lazy-list)
  (accumulate #'interleave the-empty-lazy-list lazy-list))
There are many times when you want a lazy-list that has been constructed by applying some function to every element of some other lazy-list. That's what LAZY-MAP is for:
;;This maps some proc across every element of some lazy-list.
(defun lazy-map (proc lazy-list)
  (if (empty-lazy-list-p lazy-list)
      the-empty-lazy-list
      (lazy-cons (funcall proc (lazy-car lazy-list))
                 (lazy-map proc (lazy-cdr lazy-list)))))
It's also sometimes desirable to remove all elements from a lazy-list that don't pass some test. FILTER is the function that does that:
;;This returns the lazy-list that contains all items that, when passed
;;pred, return something non-NIL.
(defun filter (pred lazy-list)
  (cond ((empty-lazy-list-p lazy-list) the-empty-lazy-list)
        ((funcall pred (lazy-car lazy-list))
         (lazy-cons (lazy-car lazy-list)
                    (filter pred (lazy-cdr lazy-list))))
        (t (filter pred (lazy-cdr lazy-list)))))
The next function is useful on those occasions that you need to apply a function to each element of a lazy-list for its side-effects, rather than to produce a new lazy-list:
;;This is an awful lot like LAZY-MAP, except that it doesn't accumulate
;;results, which is a fancy way of saying that you use LAZY-MAP if you need
;;a function result and FOR-EACH if you need side-effects.
(defun for-each (proc lazy-list)
  (if (empty-lazy-list-p lazy-list)
      'done
      (progn (funcall proc (lazy-car lazy-list))
             (for-each proc (lazy-cdr lazy-list)))))
Finally, there's FLATMAP, which is just the combination of FLATTEN and LAZY-MAP. LAZY-MAP often produces a lazy-list of lazy-lists that needs to be flattened, so this is just some syntactic sugar to make that easier:
;;Flattening the result of lazy-mapping is so useful and so common that
;;there's a whole separate function for it.
(defun flatmap (f s)
  (flatten (lazy-map f s)))
Once in a great while, it's nice to convert a regular list to a lazy-list:
;;Sometimes (ok, rarely) it's nice to convert a list to a lazy-list:
(defun lazy-list (list)
  (if (null list)
      the-empty-lazy-list
      (lazy-cons (car list) (lazy-list (cdr list)))))
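As a quick illustration of the converter working with the accessors (my example, not from the accompanying sources):

;;LAZY-CDR forces the delayed tail, so this picks out the second element:
(lazy-car (lazy-cdr (lazy-list '(1 2 3)))) ;=> 2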
Another fairly common operation is nested mapping. That is, several lazy-lists are generated, some function is mapped across them, and the results are accumulated according to some other function. It's worth defining a fairly complex macro, COLLECT, that will make code that does this easier to read, because the expanded code can be pretty illegible, and contrary to popular belief, good Lisp programmers do care about their code being readable! The goal of the COLLECT macro is to take expressions of the form:
(collect <result>
         ((<v1> <set-1>) ... (<vn> <set-n>))
         <restriction>)
and expand them to:
(lazy-map #'(lambda (tuple)
              (let ((<v1> (car tuple)) ... (<vn> (ca...dr tuple)))
                <result>))
          (filter #'(lambda (tuple)
                      (let ((<v1> (car tuple)) ... (<vn> (ca...dr tuple)))
                        <restriction>))
                  (flatmap #'(lambda (<v1>)
                               ...
                               (lazy-map #'(lambda (<vn>)
                                             (list <v1> ... <vn>))
                                         <set-n>)
                               ...)
                           <set-1>)))
It's probably not too hard to see why you'd want a macro like COLLECT to create these kinds of structures. Abelson and Sussman don't list this macro; it's left as an exercise for the reader. Here's my Common Lisp solution to that exercise:
;;This is the tricky one. The COLLECT macro makes nested mappings a lot
;;easier than they would be otherwise, but this is the most complex macro
;;I've ever had to write. Here goes nothing:
(defmacro collect (result pairs &optional (restriction t))
  (let ((vars (mapcar #'car pairs))
        (sets (mapcar #'cadr pairs))
        (lets (genlets pairs)))
    `(lazy-map #'(lambda (tuple)
                   (let ,lets ,result))
               (filter #'(lambda (tuple)
                           (let ,lets ,restriction))
                       ,(genmaps vars sets)))))
;;Given a list of pairs, this creates a let body based on tuple.
(defun genlets (pairs)
  (do ((i (1- (length pairs)) (1- i))
       (result () (cons (cons (car (nth i pairs))
                              (list (list 'nth i 'tuple)))
                        result)))
      ((< i 0) result)))
;;This beast generates the nested flatmap/lazy-map sequence for the vars
;;and sets.
(defun genmaps (vars sets)
  (labels ((genmaps-1 (vars sets depth)
             (if (null (cdr sets))
                 `(lazy-map #'(lambda (,(car (last vars)))
                                (list ,@vars))
                            ,(car sets))
                 `(flatmap #'(lambda (,(nth depth vars))
                               ,(genmaps-1 vars (cdr sets) (1+ depth)))
                           ,(car sets)))))
    (genmaps-1 vars sets 0)))
Infinitely long lazy-lists can be both interesting and useful:
;;An infinite lazy-list of 1s:
(defconstant ones (lazy-cons 1 ones))
;;A function to add two lazy-lists together:
(defun add-lazy-lists (l1 l2)
  (cond ((empty-lazy-list-p l1) l2)
        ((empty-lazy-list-p l2) l1)
        (t (lazy-cons (+ (lazy-car l1) (lazy-car l2))
                      (add-lazy-lists (lazy-cdr l1) (lazy-cdr l2))))))
;;The lazy-list of all integers:
(defconstant integers (lazy-cons 1 (add-lazy-lists ones integers)))
;;A function to scale all elements of a lazy-list by some constant:
(defun scale-lazy-list (c lazy-list)
  (lazy-map #'(lambda (x) (* x c)) lazy-list))
There are many interesting things that can be done with these functions and data structures. It isn't terribly difficult, for example, to define the lazy-list of all prime numbers:
;;Heres the sieve of Eratosthenes written using lazy-list functions:
(defun sieve (lazy-list)
  (lazy-cons (lazy-car lazy-list)
             (sieve (filter #'(lambda (x)
                                (not (divisiblep x (lazy-car lazy-list))))
                            (lazy-cdr lazy-list)))))
;;DIVISIBLEP is a small helper: does y divide x evenly?
(defun divisiblep (x y)
  (zerop (rem x y)))
(defconstant primes (sieve (lazy-cdr integers)))
Theoretically, all prime numbers can be easily gotten from this data structure. In actual practice you're likely to run out of memory asking for large ones. Speaking of asking for elements of lazy-lists, a LAZY-NTH function would be very helpful:
;;Like NTH for normal lists, except lazy:
(defun lazy-nth (n lazy-list)
  (if (= n 0)
      (lazy-car lazy-list)
      (lazy-nth (1- n) (lazy-cdr lazy-list))))
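For instance, given the infinite lazy-lists defined above, a couple of illustrative calls (my examples, not from the accompanying sources):

(lazy-nth 4 integers) ;=> 5
(lazy-nth 10 primes) ;=> 31, the eleventh prime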
The hairy COLLECT macro that I wrote is used for nested mappings, as I mentioned above. One simple example is generating the lazy-list of all triples that represent wins on a tic-tac-toe board whose cells are numbered to form a magic square: the numbers along every winning line add up to fifteen. So what we're interested in generating is the lazy-list of all triples of distinct positive integers i, j, and k, each less than or equal to n, that sum to s, where n is 9 and s is 15. Let's write TRIPLES using COLLECT:
;;Find all triples of i, j, and k <= n that sum to s.
(defun triples (n s)
  (collect (list i j k)
           ((i (enumerate-interval 1 n))
            (j (enumerate-interval 1 (1- i)))
            (k (enumerate-interval 1 (1- j))))
           (= (+ i j k) s)))
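An illustrative call (mine, not from the accompanying sources): asking for the first winning triple forces just enough of the nested lazy-lists to find one.

;;Returns some list (i j k) of distinct integers <= 9 summing to 15;
;;exactly which triple comes first depends on how INTERLEAVE weaves
;;the underlying lazy-lists together:
(lazy-nth 0 (triples 9 15))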
ENUMERATE-INTERVAL simply returns the lazy-list of integers from its first parameter to its second parameter, inclusive. It looks like this:
;;Return the lazy-list of integers from low to high:
(defun enumerate-interval (low high)
  (if (> low high)
      the-empty-lazy-list
      (lazy-cons low (enumerate-interval (1+ low) high))))
As you can see, lazy-lists can be a powerful medium for expressing some interesting mathematical and logical ideas. If there is interest, I can explore more uses in future articles. One other example could be a lazy-list-based solution to the classic eight-queens problem. Another, somewhat more practical, idea would be to discuss why lazy-lists are useful in the design and implementation of a forward- and backward-chaining inference engine for deductive information retrieval. As always, I'd love to hear from you readers about what you'd like to see, so please feel free to write or call!
That's all for this time. Enjoy!