
Lazy Lists
Volume Number:5
Issue Number:10
Column Tag:Lisp Listener

(Mumble Paul (About Lisp))

By Paul Snively, Wheeling, IL

Note: Source code files accompanying article are located on MacTech CD-ROM or source code disks.

It’s been a while since anyone has written anything about any dialect of Lisp or, for that matter, anything that has anything to do with Lisp in MacTutor. This might lead one to the conclusion that nothing of any importance is going on in the Lisp world, or with Lisp as it relates specifically to the Macintosh. This is somewhat saddening, because nothing could be further from the truth.

The most obvious and most visible thing that’s happened in the Lisp world relative to the Macintosh is that Apple Computer, Inc. purchased the assets of Coral Software, Inc. in a move that appears to have surprised several people, myself included. It’s quite rare for Apple to out-and-out buy a third-party software-development house. My interpretation of the acquisition is that it demonstrates Apple’s corporate commitment to many different kinds of artificial-intelligence research with respect to future system architectures, and that Apple uses Allegro CL on the Macintosh a great deal internally (Apple seems to be a stickler for having control of the development systems that it uses internally, which is why MPW exists in the first place).

I won’t go into great detail in this space about Allegro CL [for details, see MacTutor, March 1988]. Suffice it to say that the current version, as of this writing, is 1.2.2, and that when it ships through APDA it will include the base language, the Foreign Function Interface that will allow linking MPW C or Pascal code into your Lisp program, and the Standalone-Application Generator that will allow you to turnkey your Lisp program. Also available from APDA for twenty dollars will be the current version of Portable CommonLOOPS, about which I’ll have more to say a bit later.

The other big news in the Common Lisp community on the Macintosh is that ExperTelligence, Inc. has licensed Procyon Common Lisp from Procyon Research, Ltd. in the United Kingdom for distribution in the United States of America. The Procyon implementation has a great deal going for it, not the least of which is an almost religious adherence to the specification of Common Lisp set forth in Steele’s Common LISP: The Language, hereafter called CLtL, which is the canonical description of the language published in 1984. This is a two-edged sword: there is a fair amount of Common Lisp source out there that presumes a somewhat more liberal interpretation of CLtL than the Procyon implementation supports. This is rarely, if ever, damaging (the only example I’ve come across in actual practice is that FIND-SYMBOL wants an honest-to-goodness package object as its optional second parameter, whereas Allegro CL, and apparently most Common Lisp implementations, will allow a package name to be passed).

Another thing that can be considered a plus of the Procyon system is that it seems to know a fair amount about the Macintosh as far as data structures are concerned. For example, Procyon includes a pretty nifty structure editor. A structure editor, as opposed to a text editor, deals with Lisp data structures in the heap, rather than their textual representation in a file. One of the interesting things that it does is to treat bit-vectors, one of Common Lisp’s data types, as bitmaps, providing a fatbits-style editor to modify the contents of the bit-vector. It’s touches like this that make the Procyon system a pleasure to use.

In terms of system performance, Procyon and Allegro subjectively seem very close. Both companies, upon hearing that I intended to write about the two systems, made comments about how performance could be tweaked out of their systems, but I’m going to adopt here the attitude that there are really only two levels of performance for any given system: good enough and not good enough. It’s a subjective measurement anyway; FullWrite Professional’s performance is “good enough” to me, but I know a great many people who do not use it precisely because its performance is “not good enough” to them. To me, both Allegro CL and Procyon Common Lisp perform well enough.

Another thing about Procyon that you may or may not appreciate is the fact that it comes with a two-volume set of documentation, and that Procyon Common Lisp’s DOCUMENTATION function is indexed to the printed documentation (although, happily, lambda lists for functions are still available online). It’s a good thing that Procyon’s documentation is so comprehensive and large; the implementation offers quite a few extensions above and beyond CLtL’s standard definition.

Procyon includes several examples, both of generic Common Lisp programming and of several of the Procyon-specific extensions, such as the so-called “Common Graphics” system that implements Procyon’s interface to the Macintosh Toolbox and OS. Oddly, Procyon, an ostensibly complete Common Lisp implementation, does not ship with CLtL, a must for any Common Lisp programmer, nor does it ship with any of the standard Common Lisp-based texts such as Winston and Horn. It’s my opinion that this somewhat mars the educational value of an otherwise well-documented system.

In the 2.0 release that I received for evaluation, Procyon’s text editor is limited to a maximum of 32K, a limitation that both Procyon and ExperTelligence assure me will be removed, if it hasn’t been already by the time of this writing, or by the time it reaches the shelves. I’m also told that some items, such as a file compiler and an implementation of the forthcoming Common Lisp Object System (CLOS), are under development and will either be integrated into the system or available separately for a modest charge.

Speaking of CLOS, Procyon 2.0 does not ship with any kind of object system, which means that it can’t easily be extended to support new types of OS and/or Toolbox data structures and functions. There is, in fact, a fair amount of documentation describing how to create custom stream types, since most Common Lisp I/O revolves around streams; but without an object system, adding new stream types seems needlessly complex and unwieldy, at least when compared to Allegro CL, which allows you to define subclasses of the basic *STREAM* class and then simply define a few methods to get a new stream type (a good example of this is Allegro’s Serial-Streams.Lisp source file, which defines a stream type that allows Allegro CL to do stream-oriented I/O through the serial ports).

Both Allegro CL and Procyon Common Lisp are good, solid, reliable implementations of the Common Lisp standard, a standard which seems to be of increasing importance in the world at large and the Macintosh world in particular. Both will at least consider running on a two-megabyte machine, although realism dictates at least four to get anything accomplished, and, as with all systems, eight is something of a dream. Procyon offers a few more tools, such as the structure editor, and seems somehow “Macier.” Allegro offers an object system, easier Toolbox and OS access, and feels “Lispier.” Allegro CL also now benefits somewhat from being supported by Apple Computer, Inc. Individual preference will be the final arbiter in terms of which system you find yourself using.

[In the late-breaking news department, ExperTelligence will have Procyon Common Lisp 2.1 available by the time you read this. 2.1 will remove the limitations noted above, and the product line is divided into three levels, as shown:

            Personal     Professional  Developer
Retail:     $620.00      $1250.00      $2000.00
Education:  $445.00      $900.00       $1450.00
Includes:   Procyon 2.1  Procyon 2.1   Procyon 2.1





CLOS = Common Lisp Object System, FFI = Foreign Function Interface, Runtime = Standalone Application Generator, and ID = Interface Designer.

If the Developer’s version of Procyon sounds intriguing, it should. As you may know, ExperTelligence’s Interface Designer was the precursor to NeXT’s Interface Builder, which is a significant element in why the NeXT computer is so easy to program. CLOS answers my criticisms about Procyon lacking a real object system, and the FFI and Runtime systems are great if you’re doing serious commercial work. ExperTelligence tells me I’ll receive a review copy of the Developer’s system once all components of it have cleared customs as they come from the United Kingdom, and I will report on them once I have them. Procyon 2.0 owners should call ExperTelligence at (805) 967-1797 for upgrade information.]

A little bit earlier I mentioned Portable CommonLOOPS in passing. For some time there has been something of a war raging among a variety of object-oriented programming systems that have been written in Lisp. Lisp Machines, Inc. (now defunct) had Object Lisp, which survives in Allegro CL. Symbolics, Inc. and MIT have Flavors, which is offered by a variety of vendors and is available as an option for Allegro CL. Xerox PARC has Portable CommonLOOPS, the current generation of what started as LOOPS, became CommonLOOPS when Common Lisp was defined, and was then made “portable” in the sense that it has been customized for several of the most popular implementations of Common Lisp, including Allegro CL for the Macintosh.

One reason that I think Portable CommonLOOPS is important is that it’s a very powerful, flexible object-oriented programming environment that runs very well in Allegro CL. The other, and perhaps more important, reason is that an ANSI committee is standardizing the Common Lisp language, that standard will include the Common Lisp Object System (CLOS), and each version of Portable CommonLOOPS released by Xerox PARC is a little bit closer to CLOS, which means that it’s a little bit closer to the future. Also, Portable CommonLOOPS is in the public domain, which means the price is right, and you get source code. If you’re thinking about writing a large object-oriented program in Common Lisp, think strongly about writing it in Portable CommonLOOPS. That way it will stand a much better chance of surviving the inevitable transition to an ANSI-standard Common Lisp implementation. The Portable CommonLOOPS sources are available via anonymous FTP from a number of UNIX sites on the nets, as well as from APDA (my understanding is that they’ll charge $20.00 or so for the distribution).

One problem with Portable CommonLOOPS is that the source code is the only documentation, apart from a handful of release notes that are really only of use if you’re already part of the “intelligentsia” with respect to Portable CommonLOOPS. Luckily, since Portable CommonLOOPS is moving toward being a CLOS implementation, you can largely get away with reading CLOS documentation and applying it to Portable CommonLOOPS (which is the whole point anyway). To that end, I recommend Winston and Horn Third Edition to the person who needs a complete Common Lisp text that devotes some space to CLOS, and Object-Oriented Programming in Common Lisp to the person who already has a working knowledge of Common Lisp but is perhaps a CLOS tyro. This book, by Sonya Keene, takes the reader through most aspects of CLOS from the bare basics to advanced subjects, using examples that actually mean something, such as process locking and implementing I/O streams within CLOS. Keene, an employee of Symbolics, Inc. and a member of the ANSI committee responsible for the CLOS standard and specification, obviously knows her subject well and presents it in a highly coherent manner.

All that has been going on has not necessarily been going on in the Common Lisp world, however. Since I last wrote about Lisp in MacTutor, Semantic Microsystems, developers of MacScheme and MacScheme+Toolsmith, have also been busy. As of this writing, the current version of MacScheme+Toolsmith is 1.51. This version includes support for much of Inside Macintosh Volume V; a few new items in the menus, such as “Load…,” which allows you to load a Scheme file without first opening it and without typing the full pathname in the transcript window; and some new contributed code.

The first contribution is a very nice macro called EXTEND-SYNTAX that uses a pattern-matching system to allow the construction of other macros that extend Scheme’s syntax. This utility is described in The Scheme Programming Language by R. Kent Dybvig. It makes writing fairly complex macros much less painful than it normally is in Scheme. As an example, consider the LET special form. As most Scheme programmers realize, what LET does is wrap the body of the LET in a lambda expression whose parameters are the names, and then apply that lambda to the values assigned to the names. With EXTEND-SYNTAX, you would express this idea like this:

; Here’s how the let macro could have been defined:

(extend-syntax (lett)
               ((lett ((x e) ...) body ...)
                ((lambda (x ...) body ...)
                 e ...)))

In other words, we’re defining a new piece of syntax, LETT, that can accept some list of pairs (that’s what the “(x e) ...” means) and some body of expressions (the “body ...”) and will expand to an application: “(lambda” followed by the list of the first elements of the pairs (that is, the names), followed by all the expressions of the body, with that whole lambda expression then applied to all of the second elements of the pairs (that is, the values). In other words,

(lett ((x 3) (y 4))
 (+ x y))

would expand to:

((lambda (x y)
   (+ x y))
 3 4)

This is just a simple example, but hopefully you can see how expressing macros as syntactical patterns that expand to other syntactical patterns makes macro-writing much easier.

The other major contribution shipped with 1.51 is a MacScheme implementation of Texas Instruments’ SCOOPS system. This is the object-oriented programming system that TI wrote for and ships with their own PC-Scheme system, and Semantic Microsystems’ own John Ulrich did the MacScheme version. If you’ve never tried your hand at object-oriented programming and you own MacScheme+Toolsmith 1.51 or later, take a look at SCOOPS. The MacScheme version leaves some fairly important optimizations out, and unfortunately uses EVAL, which means that standalone applications generated with it will not have any dead code removed, since the routine that finds dead code can’t rip anything out lest it be needed for EVALing at run-time. These shortcomings aside, SCOOPS remains a nice example of how object-oriented programming can be implemented, and seems to be a fairly powerful system.

All of this assumes that you have some Scheme experience. However, it sometimes seems that only people who went to MIT or Indiana University to study computer science have even heard of Scheme. Semantic Microsystems must feel the same way--they recently announced Scheme Express, a compact Scheme for those who want to learn the language while investing a minimal amount of money. Priced at $69.95, Scheme Express should fit the bill quite nicely. It is apparently extremely small and correspondingly slow: it will fit into 512K and will be quite comfortable even on a Mac Plus, but it is claimed to run at about 25% of the speed of MacScheme. On the other hand, compiled Scheme Express programs are claimed to be one quarter of the size of compiled MacScheme programs.

My advice to anyone and everyone who is curious about Lisp at all is this: give Semantic Microsystems a call at (503) 643-4539. Chances are pretty good that you’ll hear the cheerful voice of Kathi Williams at the other end of the line, or perhaps you’ll speak to John Ulrich (if you do, ask him about object-oriented programming using lexical closures as objects. Then sit back and be prepared to learn a lot). And please tell either of them that I sent you. Then ask about Scheme Express. At $69.95, you can’t go too far wrong as far as a learning environment is concerned. If you do get Scheme Express, the next step is to run out and buy a copy of Structure and Interpretation of Computer Programs, by Abelson and Sussman.

Abelson and Sussman is MIT’s introductory computer-science text. Don’t let that scare you off; many students face it every year with no computer programming experience, let alone any Scheme experience. The book is a gem; the principles that it espouses and design theory that it purveys apply regardless of your background or language preferences. The code given happens to be in Scheme, and the exercises are geared toward being implemented in Scheme. So with Scheme Express and this book by your side, you are well-equipped to learn Lisp, and have invested about $100 or so. That’s not unreasonable, even if you’re only mildly curious. There is much for me to learn from Abelson and Sussman; you might discover that there is for you, too.

Abelson and Sussman isn’t your everyday intro text. Things start off easily enough with Chapter 1, Section 1 having you find square roots using Newton’s method.

Section 2 deals extensively with why some algorithms are “better” than others (“better” in most cases meaning significantly faster) and has you write a function that tests for the primality of some number. This section is worth paying a lot of attention to--it thoroughly describes the various “orders” of certain algorithms (O(n²), O(n), O(log n), etc.) and gives you some mental tools with which to reduce the order of your algorithm (that elusive O(log n) algorithm is much faster than that intuitively obvious O(n²) one).
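The flavor of that reduction can be shown with exponentiation, one of the book’s own motivating cases. Here is a sketch in Common Lisp rather than the book’s Scheme; the function names are mine:

```lisp
;; EXPT-LINEAR performs n multiplications: O(n).
(defun expt-linear (b n)
  (if (zerop n)
      1
      (* b (expt-linear b (1- n)))))

;; EXPT-FAST squares intermediate results where it can,
;; halving the problem each time: O(log n).
(defun expt-fast (b n)
  (cond ((zerop n) 1)
        ((evenp n) (let ((half (expt-fast b (/ n 2))))
                     (* half half)))
        (t (* b (expt-fast b (1- n))))))

(expt-fast 2 10)  ; => 1024
```

Both compute the same answer, but raising a number to the 1000th power costs EXPT-FAST only about fifteen multiplications where EXPT-LINEAR needs a thousand.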

Section 3 introduces the idea that functions are just another kind of data, and in particular that functions can take other functions as parameters, and/or can return functions as their values. This is one of Lisp’s most distinctive features as a language--there is no enforced distinction between code and data. Functions can be passed as arguments, returned as values, stored in variables, etc. In chapter 1, a good grasp of procedural abstraction is conveyed in only sixty-seven pages.
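A two-line Common Lisp example (mine, not the book’s) shows both directions at once--a function that takes functions in and hands a new function back:

```lisp
;; COMPOSE accepts two functions and returns a brand-new function
;; that applies G, then F, to its argument.
(defun compose (f g)
  #'(lambda (x) (funcall f (funcall g x))))

(funcall (compose #'1+ #'abs) -5)  ; => 6
```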

Chapter 2 is about data abstraction. Section 1 covers how you might implement arithmetic operators for rational numbers, such as 1/3. A later example describes interval arithmetic, such as is used every day by engineers who use electronic components that have tolerance ranges, such as a resistor that is within 5% of having 10 ohms of resistance.

Section 2 provides a lucid discussion of hierarchical data, such as trees, and covers such seemingly divergent examples as symbolic differentiation, representing sets, and Huffman encoding trees.

Section 3 revolves around the notion that the representation of data should be irrelevant to whatever uses the data, assuming that the published interface remains the same. The example of complex numbers is used; complex numbers can be expressed either in rectangular form (real part and imaginary part) or polar form (magnitude and angle). The user of the complex number system should neither know nor care which representation is being used (in fact, both might be used at different times). A complex arithmetic system is implemented to demonstrate this idea, and is then generalized to demonstrate data-directed programming.

Section 4 deals with defining generic operators that can manipulate data of varying types. The exercises in this section integrate the normal arithmetic functions with the rational and complex ones that you wrote in earlier exercises. Problems of data type coercion are dealt with. The example of adding symbolic algebra (adding and multiplying univariate polynomials) is given, demonstrating that by taking advantage of the generic arithmetic operators, the polynomial functions can operate on polynomials with coefficients of any type known by the generic arithmetic functions. If coercion was added according to a prior exercise, then the functions can also handle polynomials with differing types of coefficients. Rational functions (rational numbers whose numerators and denominators are polynomials) are also added to the generic arithmetic system.

Chapter 3 is perhaps the most interesting chapter in the book, in my opinion. It talks about issues of “Modularity, Objects, and State.” Section 1, interestingly, talks about assignment. It may seem rather strange to have a textbook not get around to mentioning assignment until 167 pages have gone by, but the thing that most people don’t seem to understand about Lisp is that it’s not one of the so-called “procedural” languages. It is rather one of the so-called “functional” languages. “Functional” doesn’t mean that the language does things; it means that everything in Lisp is a function that returns some value. There is an entire branch of computer science devoted to the exploration of this approach to programming (for example, there’s an interesting beast called the applicative-order Y-combinator that allows you to write recursive functions without naming them--that is, without using assignment--a task that, on the face of it, seems impossible).
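Since the applicative-order Y-combinator sounds like sleight of hand, here is a sketch of it in Common Lisp (my transcription--and Common Lisp is a “Lisp-2,” hence all the FUNCALLs):

```lisp
;; Y takes a function of SELF and returns its fixed point,
;; giving us recursion without the recursive function ever
;; being defined under a name of its own.
(defun y (f)
  (funcall
   #'(lambda (x)
       (funcall f #'(lambda (n) (funcall (funcall x x) n))))
   #'(lambda (x)
       (funcall f #'(lambda (n) (funcall (funcall x x) n))))))

;; A "factorial generator": given SELF, it returns factorial.
(funcall (y #'(lambda (self)
                #'(lambda (n)
                    (if (zerop n)
                        1
                        (* n (funcall self (1- n)))))))
         5)
;; => 120
```

The inner lambda never refers to itself by name; Y ties the knot for it.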

Section 2 explains in depth exactly what “environments” are, and how they influence how expressions are evaluated, and their relationship to variables and internal definitions.

Section 3 details how mutable data can be used to represent data structures such as queues and tables. The examples given here are among the most interesting in the book, in my opinion.

The first is a digital circuit simulator that allows you to build circuits out of AND gates, OR gates, and inverters, specifying a signal delay for each component, applying probes to various wires, and then applying signals to various wires. The probes show the signal and the simulation time at which the signal appeared, so that you can determine, for example, the component configuration(s) that will enable a binary adder to work as far as the delays are concerned.

The second example is a system supporting the propagation of constraints through a constraint network. One box might constrain one connector to be the product of the other two connectors, for example. Another constrains its single connector to have a constant value, and so on. By building networks of these constraints, arithmetic relationships (as opposed to functions, which are one-way) can be supported. For example, the relationship between Celsius and Fahrenheit temperatures is 9C = 5(F - 32). Note that this relationship can be solved algebraically either for C in terms of F or for F in terms of C, and in most programming languages that is exactly what you must do: solve for one variable in terms of the other, and express that as a function in the language. With the constraint system, you can instead make a network out of two multiplication constraints, an addition constraint, and three constant constraints. You can then put probes on the two connectors, C and F, and set one or the other to a value. The system is bidirectional; it doesn’t matter whether you provide a value for C or for F, because the probes will show the correct values of both as the various constraints are activated.

Section 4 is where life really gets interesting. The previous three sections have dealt extensively with the concepts underlying assignment and local state, and have employed rather crude, “homebrew” techniques to implement what amounts to object-oriented programming. Section 4 instead takes advantage of the fact that functions are simply another Lisp data type that can be stored in lists or variables for later use to implement a beautifully strange and strangely beautiful data structure called a stream. This is not a “stream” in the Common Lisp sense. Rather, an Abelson and Sussman stream is a collection of data that “feels” very much like a list, but has some special properties. First of all, you must be very careful about using assignment in the presence of streams. That is, a stream is not mutable in the same way that a list is. Secondly, streams can represent some data structures much more efficiently than a list can, in the sense that not all of a stream’s elements need ever be evaluated (I’ll say more about this later). For this same reason, a stream can represent some infinitely-long data structures that cannot be represented as lists at all. A good example of this last ability would be the stream of all prime numbers, which isn’t difficult to generate as a stream, but is quite impossible to generate as a list.

Chapter 4 conveys the importance and power of metalinguistic abstraction. That is, it points out that oftentimes the best approach to using a programming language to solve a problem is to use that language to implement another language in which the problem can be trivially solved. Previous applications of this idea (the language for simulating digital circuits and the language for expressing networks of constraints) are pointed out. In Section 1, Scheme’s own evaluator is rewritten in Scheme.

In Section 2, this new evaluator is modified to examine the effects of using normal-order evaluation as opposed to applicative-order evaluation, as well as to examine the results of using various methods of binding values to variables.

Section 3 discusses packages, which are environments that distinguish among variables having the same name. Their implementation and use to prevent name conflicts in a generic arithmetic system is explained.

Section 4 shows how metalinguistic abstraction can be used to create a programming language that follows an entirely different paradigm than Lisp’s. Logic programming, also known as declarative programming, is discussed. Much attention is paid to the issues surrounding logical expressions such as AND, OR, and NOT (not to be confused with the Lisp primitives of the same names) and their differences from mathematical logic.

Section 5 details the logical query system, and goes into great depth about pattern matching, rules, and unification (a generalization of pattern matching that makes the application of rules possible). Again the emphasis is on the usefulness of being able to implement a new language in Lisp that can then be used to solve the original problem.

Chapter 5, ironically, descends from what might seem the extreme high end of the programming spectrum--using a language to implement another language--to what might seem the low end of that same spectrum, namely creating processes built around the concept of having a certain number of registers, a handful of simple instructions, and a stack. Section 1 covers the simulation of an arbitrary “register machine.”

Section 2 describes the “explicit-control” evaluator. This differs from the evaluator of chapter 4 in that rather than being written metacircularly (that is, rather than being a Scheme evaluator written in Scheme) it is written in the language of the simulated register machine. Issues such as sequence evaluation and tail recursion optimization are covered, as are the handling of special forms, such as conditionals. A sample run of the evaluator is discussed, and then the issue of internal definitions is touched upon.

Section 3 discusses the compilation of Scheme programs for the simulated register machine. Compiling expressions, compiler data structures, code generation, interfacing to the evaluator, and lexical addressing are all presented in some depth.

Finally, Section 4 reveals some of the secrets of Lisp’s dynamic memory management, discussing how memory is allocated and how unused memory is automagically returned to the heap for later reuse.

The scope of Structure and Interpretation of Computer Programs may seem a bit daunting at first, but with a real Scheme such as Scheme Express in front of you, the secrets within this text come tumbling out at a sometimes alarming rate, and there is a great deal of pleasure to be had from mastering some aspect of programming that had eluded you before (at least, that has been my experience).

When talking about chapter 3, I mentioned streams in passing and promised to talk about them in more detail, which I will do, because you can do some pretty interesting things with them. Since I’m going to implement them from top to bottom in Common Lisp, I’d better call them lazy-lists instead of streams, since Common Lisp uses the term “stream” to describe an I/O sink. First of all, an important thing to understand is that lazy-lists are generally used to model some situation where there is a flow of data from point A to point B. As a result, there are a handful of standard operations that you might like to be able to apply to the contents of any given lazy-list. For example, there may be some function that you wish to apply to every element of the lazy-list. Similarly, there may be elements of some lazy-list that you wish to filter out of that lazy-list if they satisfy some predicate. It is also quite common to take the elements of a lazy-list and accumulate them together somehow. Another common function is to append one lazy-list to another.

Let’s define the lazy-list data structure and primitive operations first. Then we can define these higher-order operations on lazy-lists. A lazy-list is nothing more than a list of two elements. The first element is anything; the second element is a function that, when evaluated, returns another lazy-list consisting of the next element followed by another such function, and so on. At any point along the way, evaluating that function may instead yield the-empty-lazy-list, signifying the end of the lazy-list (although, as has been noted, this is not always necessary; some lazy-lists are infinitely long).

Lazy-lists are built using something very much like CONS. In fact, we’ll call the function LAZY-CONS. The difference between LAZY-CONS and CONS is that LAZY-CONS doesn’t immediately evaluate its second parameter. Instead it delays it. DELAY is a standard part of Scheme, but it isn’t in most Lisps. In Common Lisp, you would implement DELAY something like this:

(defun memoize (fun)
  (let ((already-evaled nil)
        (value nil))
    #'(lambda ()
        (if already-evaled
            value
            (progn
              (setf already-evaled t)
              (setf value (funcall fun)))))))

(defmacro delay (thing)
  `(memoize #'(lambda () ,thing)))

This is actually an optimized delay; the MEMOIZE function is a bizarre little thing that takes a function as a parameter and wraps it up in a neat little package that basically determines whether the function has ever been called before. If it hasn’t, it calls it and preserves (and returns) its returned value. If it has, it simply returns the stored value. In other words, any function that we pass to MEMOIZE will only get called once, assuming that what is actually called is the function that MEMOIZE returns.

DELAY, then, is a macro (so it doesn’t evaluate “thing,” which would defeat the whole purpose) that first makes a function that, when called, returns “thing” as its value (by wrapping “thing” in a lambda expression of no arguments), then passes that function to MEMOIZE, so that “thing” only gets evaluated when it absolutely has to, and even then only once.
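A quick illustration, assuming DELAY memoizes as described above. Since a promise here is just a function, FUNCALLing it forces it:

```lisp
;; *COUNT* tracks how many times the delayed expression really runs.
(defparameter *count* 0)

(defparameter *promise*
  (delay (progn (incf *count*) 42)))

;; Nothing has been evaluated yet, so *count* is still 0 here.
(funcall *promise*)  ; => 42, and *count* is now 1
(funcall *promise*)  ; => 42, and *count* is still 1
```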

Building a lazy-list, then, would look something like this:

;;Here is our lazy-list CONStructor:
(defmacro lazy-cons (thing lazy-list)
  `(cons ,thing (delay ,lazy-list)))

The flip side of building a lazy-list, of course, is accessing its elements. Since it’s like a list, we’ll write LAZY-CAR and LAZY-CDR to get at its components. LAZY-CAR is very straightforward:

;;Here is LAZY-CAR:
(defun lazy-car (lazy-list)
  (car lazy-list))

LAZY-CDR is only slightly more involved, since the remaining item in the lazy-list data structure has been delayed:

;;Here is LAZY-CDR:
(defun lazy-cdr (lazy-list)
  (force (cdr lazy-list)))

FORCE is used to return the value of the cdr of the lazy-list, and thanks to MEMOIZE, it will do so whether it’s the first time or the umpteenth time without re-evaluating the original object every time. Since the cdr of a lazy-list is a function, FORCE is simply:

(defun force (promise)
  (funcall promise))

With these functions, we have the bare-bones basics of the lazy-list manipulation functions. First let’s define a couple of useful items:

;;Define the empty lazy list:
(defconstant the-empty-lazy-list '())

;;How do we know if a lazy-list is empty?
(defun empty-lazy-list-p (lazy-list)
  (eq the-empty-lazy-list lazy-list))

Once we’ve defined these, for the sake of termination testing, formulating the remaining higher-order lazy-list functions isn’t too difficult. For example, appending lazy-list l2 to lazy-list l1 works like this:

;;This is to lazy lists what Common Lisp’s APPEND is to normal lists.
(defun append-lazy-lists (l1 l2)
  (if (empty-lazy-list-p l1)
    l2
    (lazy-cons (lazy-car l1)
               (append-lazy-lists (lazy-cdr l1) l2))))

Accumulating the elements of a lazy-list looks like this:

;; This is a nice, generic accumulation function that takes a combiner
;; function (usually #'+ or #'cons or something like that), an initial
;; value (typically 0 or 1 for numeric accumulations or '() for lists)
;; and some lazy-list to apply all of this to.
(defun accumulate (combiner initial-value lazy-list)
  (if (empty-lazy-list-p lazy-list)
    initial-value
    (funcall combiner (lazy-car lazy-list)
             (delay (accumulate combiner initial-value
                                (lazy-cdr lazy-list))))))

;;This function prevents infinite recursion when accumulating infinite
;;lazy-lists: it alternates elements from its two arguments.
(defun interleave (l1 l2)
  (if (empty-lazy-list-p l1)
    (force l2)
    (lazy-cons (lazy-car l1)
               (interleave (force l2) (delay (lazy-cdr l1))))))
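To see why interleaving matters, imagine flattening with APPEND-LAZY-LISTS instead: if the first lazy-list is infinite, no element of the second would ever appear. INTERLEAVE alternates between its two arguments, so every element eventually surfaces. For instance, using ONES and INTEGERS (two infinite lazy-lists defined later in this article):

;;Appending would never get past the endless 1’s, but interleaving
;;yields elements of both lists: 1, 1, 1, 2, 1, 3, 1, 4, and so on.
(interleave ones (delay integers))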

There are times when what you have is a lazy-list of lazy-lists, and you want to reduce that to a lazy-list of non-lazy-list elements. Using ACCUMULATE and INTERLEAVE, both defined above, it’s easy:

;;This handy thing uses ACCUMULATE to flatten a lazy-list of lazy-lists.
(defun flatten (lazy-list)
  (accumulate #’interleave the-empty-lazy-list lazy-list))

There are many times when you want a lazy-list that has been constructed by applying some function to every element of some other lazy-list. That’s what LAZY-MAP is for:

;;This maps some proc across every element of some lazy-list.
(defun lazy-map (proc lazy-list)
  (if (empty-lazy-list-p lazy-list)
    the-empty-lazy-list
    (lazy-cons (funcall proc (lazy-car lazy-list))
               (lazy-map proc (lazy-cdr lazy-list)))))

It’s also sometimes desirable to remove all elements from a lazy-list that don’t pass some test. FILTER is the function that does that:

;;This returns the lazy-list that contains all items that, when passed 
;;to pred, return something non-NIL.
(defun filter (pred lazy-list)
  (cond ((empty-lazy-list-p lazy-list) the-empty-lazy-list)
        ((funcall pred (lazy-car lazy-list))
         (lazy-cons (lazy-car lazy-list)
                    (filter pred (lazy-cdr lazy-list))))
        (t (filter pred (lazy-cdr lazy-list)))))

The next function is useful on those occasions when you need to apply a function to a lazy-list for its side-effects, rather than to produce a new lazy-list:

;;This is an awful lot like LAZY-MAP, except that it doesn’t accumulate 
;;results, which is a fancy way of saying that you use LAZY-MAP if you 
;;need a function result and FOR-EACH if you need side-effects.
(defun for-each (proc lazy-list)
  (if (empty-lazy-list-p lazy-list)
    'done
    (progn (funcall proc (lazy-car lazy-list))
           (for-each proc (lazy-cdr lazy-list)))))

Finally, there’s FLATMAP, which is just the combination of FLATTEN and LAZY-MAP. LAZY-MAP often produces a lazy-list of lazy-lists that needs to be flattened, so this is just some syntactic sugar to make that easier:

;;Flattening the result of lazy-mapping is so useful and so common that 
;;there’s a whole separate function for it.
(defun flatmap (f s)
  (flatten (lazy-map f s)))

Once in a great while, it’s nice to convert a regular list to a lazy-list:

;;Sometimes (ok, rarely) it’s nice to convert a list to a lazy-list:
(defun lazy-list (list)
  (if (null list)
    the-empty-lazy-list
    (lazy-cons (car list) (lazy-list (cdr list)))))

Another fairly common operation is nested mapping. That is, several lazy-lists are generated, some function is mapped across every combination of their elements, and the results are filtered according to some restriction. It’s worth defining a fairly complex macro, COLLECT, to make code that does this easier to read, because the expanded code can be pretty illegible, and contrary to popular belief, good Lisp programmers do care about their code being readable! The goal of the COLLECT macro is to take expressions of the form:

(collect <result>
         ((<v1> <set1>)
          ...
          (<vn> <setn>))
         <restriction>)

and expand them to:

(lazy-map #'(lambda (tuple)
              (let ((<v1> (nth 0 tuple)) ... (<vn> (nth n-1 tuple)))
                <result>))
          (filter #'(lambda (tuple)
                      (let ((<v1> (nth 0 tuple)) ... (<vn> (nth n-1 tuple)))
                        <restriction>))
                  (flatmap #'(lambda (<v1>)
                               (flatmap #'(lambda (<v2>)
                                            ...
                                            (lazy-map #'(lambda (<vn>)
                                                          (list <v1> ... <vn>))
                                                      <setn>))
                                        <set2>))
                           <set1>)))

It’s probably not too hard to see why you’d want a macro like COLLECT to create these kinds of structures. Abelson and Sussman don’t list this macro; it’s left as an exercise for the reader. Here’s my Common Lisp solution to that exercise:

;;This is the tricky one.  The COLLECT macro makes nested mappings a lot
;;easier than they would be otherwise, but this is the most complex macro
;;I’ve ever had to write.  Here goes nothing:
(defmacro collect (result pairs &optional (restriction t))
  (let ((vars (mapcar #'car pairs))
        (sets (mapcar #'cadr pairs))
        (lets (genlets pairs)))
    `(lazy-map #'(lambda (tuple)
                   (let ,lets
                     ,result))
               (filter #'(lambda (tuple)
                           (let ,lets
                             ,restriction))
                       ,(genmaps vars sets)))))

;;Given a list of pairs, this creates a let body based on tuple.
(defun genlets (pairs)
  (do ((i (1- (length pairs)) (1- i))
       (result '() (cons (cons (car (nth i pairs))
                               (list (list 'nth i 'tuple)))
                         result)))
      ((< i 0) result)))

;;This beast generates the flatmap/lazy-map sequence for the vars and sets.
(defun genmaps (vars sets)
  (labels ((genmaps-1 (vars sets depth)
             (if (null (cdr sets))
               `(lazy-map #'(lambda (,(car (last vars)))
                              (list ,@vars))
                          ,(car sets))
               `(flatmap #'(lambda (,(nth depth vars))
                             ,(genmaps-1 vars (cdr sets) (1+ depth)))
                         ,(car sets)))))
    (genmaps-1 vars sets 0)))

Infinitely long lazy-lists can be both interesting and useful:

;;An infinite lazy-list of 1’s:
(defconstant ones (lazy-cons 1 ones))

;;A function to add two lazy-lists together:
(defun add-lazy-lists (l1 l2)
  (cond ((empty-lazy-list-p l1) l2)
        ((empty-lazy-list-p l2) l1)
        (t (lazy-cons (+ (lazy-car l1) (lazy-car l2))
                      (add-lazy-lists (lazy-cdr l1) (lazy-cdr l2))))))

;;The lazy-list of all integers:
(defconstant integers (lazy-cons 1 (add-lazy-lists ones integers)))

;;A function to scale all elements of a lazy-list by some constant:
(defun scale-lazy-list (c lazy-list)
  (lazy-map #'(lambda (x) (* x c)) lazy-list))

There are many interesting things that can be done with these functions and data structures. It isn’t terribly difficult, for example, to define the lazy-list of all prime numbers:

;;Here’s the sieve of Eratosthenes written using lazy-list functions:
(defun sieve (lazy-list)
  (lazy-cons (lazy-car lazy-list)
             (sieve (filter
                     #'(lambda (x) (not (divisiblep x (lazy-car lazy-list))))
                     (lazy-cdr lazy-list)))))

(defconstant primes (sieve (lazy-cdr integers)))
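One detail: SIEVE relies on DIVISIBLEP, which doesn’t appear in the listing above. Any predicate that tests whether its second argument evenly divides its first will do; a minimal definition (the body here is my assumption, since the original doesn’t show one) is:

;;Does y divide x evenly?
(defun divisiblep (x y)
  (zerop (mod x y)))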

Theoretically, all prime numbers can easily be obtained from this data structure. In actual practice, you’re likely to run out of memory asking for large ones. Speaking of asking for elements of lazy-lists, a LAZY-NTH function would be very helpful:

;;Like NTH for normal lists, except lazy:
(defun lazy-nth (n lazy-list)
  (if (= n 0)
    (lazy-car lazy-list)
    (lazy-nth (1- n) (lazy-cdr lazy-list))))
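Since only as much of a lazy-list as is demanded ever gets computed, LAZY-NTH lets you pull individual primes out of the infinite structure on demand:

;;Pull primes out of the infinite lazy-list one at a time:
(lazy-nth 0 primes) ;returns 2
(lazy-nth 3 primes) ;returns 7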

The hairy COLLECT macro that I wrote is used for nested mappings, as I mentioned above. One simple example is generating the lazy-list of all triples that represent wins on a tic-tac-toe board that’s numbered like this:

 2 | 7 | 6
---+---+---
 9 | 5 | 1
---+---+---
 4 | 3 | 8

This tic-tac-toe board is actually a “magic square”--the numbers in all of the “win” directions add up to fifteen. So what we’re interested in generating is the lazy-list of all distinct positive integers i, j, and k less than or equal to n that sum to s, where n is 9 and s is 15. Let’s write TRIPLES using COLLECT:

;;Find all triples of i, j, and k <= n that sum to s.
(defun triples (n s)
  (collect (list i j k)
           ((i (enumerate-interval 1 n))
            (j (enumerate-interval 1 (1- i)))
            (k (enumerate-interval 1 (1- j))))
           (= (+ i j k) s)))
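One caveat: because FLATTEN interleaves rather than appends, the triples don’t emerge in any particularly intuitive order, but every winning combination eventually appears:

;;Each element is a winning triple, such as (6 5 4) or (8 6 1):
(triples 9 15)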

ENUMERATE-INTERVAL simply returns the lazy-list of integers from its first parameter to its second parameter, inclusive. It looks like this:

;;Return the lazy-list of integers from low to high:
(defun enumerate-interval (low high)
  (if (> low high)
    the-empty-lazy-list
    (lazy-cons low (enumerate-interval (1+ low) high))))

As you can see, lazy-lists can be a powerful medium for expressing some interesting mathematical and logical ideas. If there is interest, I can explore more uses in future articles. One example could be a lazy-list-based solution to the classic eight-queens problem. Another, somewhat more practical, idea would be to discuss why lazy-lists are useful in the design and implementation of a forward- and backward-chaining inference engine for deductive information retrieval. As always, I love hearing from you readers about what you’d like to see, so please feel free to write or call!

That’s all for this time. Enjoy!


All contents are Copyright 1984-2011 by Xplain Corporation. All rights reserved.