Church Numerals
Volume Number:   7

Issue Number:   6

Column Tag:   Lisp Listener

Going Back to Church
By André van Meulebrouck, Chatsworth, CA
“Scheme is a very clear language and its tutors follow a zen philosophy. Anything unessential or controversial (i.e. not so well understood) is thrown away. The latest Scheme report is an admirable document and as semantical analysis progresses, slicing molecules then atoms and after that quarks I dare say that the revised report on Scheme will converge to λ-calculus.” Christian Queinnec [Queinnec, 1990].
This article attempts to prompt interesting insights into recursive functions and evaluation strategies by looking at an alternate numeric system (Church numerals). It also touches on various concepts such as object-oriented programming, dynamic scoping, and lazy streams.
(Sneak preview: imagine a numeric system wherein the operations addition, subtraction, multiplication, and exponentiation all take roughly the same amount of time to compute regardless of the size of the numeric arguments! Church numerals accomplish this by returning functions that “promise” to do the computation later, yet these “promises” are still bona fide Church numerals that can be used in further computations. There are of course tradeoffs from such “laziness”. For instance, when Church numerals are converted to regular numbers, computations that weren’t done previously must be completed. Tools are provided for the reader to explore these tradeoffs.)
A Hope for the Future?!?
Perhaps you might think of Alonzo Church’s λ-calculus (and numerals) as impractical mental gymnastics, but consider: many times in the past, seemingly impractical theories became the underpinnings of future technologies (for instance, Boolean algebra).
Perhaps the reader can imagine a future much brighter and more enlightened than today. For instance, imagine computer architectures that run combinators or λ-calculus as their machine instruction sets.
Imagine further that different models will be available. For instance, one might be a really cheap, simple chip that implements only pure λ-calculus as its instruction set. This is the chip that manufacturers might want to use for controlling toasters or electrical systems in your car (space ship?). Without built-in numbers, and with only limited need of integers, perhaps Church numerals might have practical import! Of course, the more expensive chips would probably have an extended λ-calculus and full numeric capabilities. Perhaps the tradeoff between a pure λ-calculus chip versus an extended λ-calculus chip would be vaguely analogous to the RISC (Reduced Instruction Set Computer) versus CISC (Complex Instruction Set Computer) controversy. For instance, the pure λ-calculus chip might be clocked faster by virtue of greater simplicity, and thereby might offer advantages over the extended λ-calculus chip, there again tempting uses of Church numerals or other “soft” numeral systems. (“Soft” here meaning “software” based, as opposed to “hard” meaning “hardware” based.)
In reality, combinator reduction machines have been built already and research on them continues. [Peyton Jones, 1987] briefly describes various projects devoted to parallel (combinator) reduction machines. [Ramsdell, 1986] describes “the Curry Chip” which is a combinator reduction machine in VLSI.
The “Minimalist” Game
All conjectures aside, I think the most compelling motivation for studying λ-calculus is the “minimalist” game.
I was introduced to a minimalist game by an ex-Soviet Russian instructor who would allow students a severely restricted set of words and grammatical constructs, then ask questions. The idea was that one’s expressive capability is not so much posited in how much one knows, but in how cleverly one wields what one knows. He was wont to point out that you are more likely to reveal yourself as a foreigner when you overextend your limits by trying to use grammatical constructs and words you aren’t comfortable with, than when you speak simply but correctly. Likewise, I propose λ-calculus as a minimalist game for Computer Science.
One observation from playing the minimalist game in Russian: what makes this game hard or easy isn’t so much how limited the number of “primitives” is, but rather how powerful they are. I claim λ-calculus gives us (perhaps) the fewest possible primitives, and they are of supreme quality.
Recap
In the last article we covered a lot of ground; (re)consider the following.
Object-oriented programming: The combinator versions of car, cdr, and cons make use of message passing: cons makes a tuple object, and car and cdr do their thing by passing messages in to that object. While this use of message passing might seem primitive, it’s actually quite flexible and powerful, because the object doesn’t have to have static knowledge of what possible messages could be passed in: the messages could be arbitrary functions. (The action taken on the message is to run the message itself!)
Here’s how a message passing version of car, cdr, and cons could be implemented in Scheme (see [Abelson et al., 1985] for similar).
(Scheme’s case statement is basically like case statements in other languages, except that the selectors are enclosed in parens. setcar! and setcdr! are destructive operators that change those respective fields [Rees et al., 1986].)
;1
MacScheme™ Top Level
>>>
(define mycons
  (lambda (x y)
    (lambda (message . args)
      (case message
        ((car) x)
        ((cdr) y)
        ((setcar!)
         (set! x (car args)))
        ((setcdr!)
         (set! y (car args)))
        ((?)
         '(car cdr setcar! setcdr! ?))
        (else
         (error "mycons: bad message" message))))))
mycons
>>> (define mycar
      (lambda (object)
        (object 'car)))
mycar
>>> (define mycdr
      (lambda (object)
        (object 'cdr)))
mycdr
>>> (define mysetcar!
      (lambda (object towhat)
        (object 'setcar! towhat)))
mysetcar!
>>> (define mysetcdr!
      (lambda (object towhat)
        (object 'setcdr! towhat)))
mysetcdr!
>>> (define foo (mycons 'a 'b))
foo
>>> (mycar foo)
a
>>> (mycdr foo)
b
>>> (mysetcar! foo 3)
3
>>> (mycar foo)
3
(Note: What the user types at the MacScheme prompt >>> is shown in italics. MacScheme responses are in boldface.
Code alluded to but not described in the text can be found at the end of the article.)
Lazy evaluation: the ideas behind Scheme’s force and delay were shown via thunks (lambda forms of no arguments). Our delay consisted of thunkifying the object to be delayed, and our force consisted of the function forceathunk, which simply invoked the thunk with no arguments. force and delay are important because they are used to create lazy streams [Abelson et al., 1985].
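As a thumbnail sketch of that idea (forceathunk is defined in file “combinators.sch”; delayedsum is a hypothetical name for this illustration):

```scheme
(define forceathunk          ; force = invoke the thunk
  (lambda (thunk)
    (thunk)))

(define delayedsum           ; delay = wrap the expression in a thunk
  (lambda ()
    (+ 2 3)))

(forceathunk delayedsum)     ; => 5
```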
Closures, free variables, lexical scoping and higher-order functions: Combinators nicely motivate a tour de force introduction to these issues because combinators rely on them so heavily.
Unraveling the secrets of Church numerals
The slick thing about Church numerals is that they are essentially pre-initialized “for loops” ready to have a function and an argument passed in.
Internally, a “canonical” Church numeral looks like:
;2
(lambda (f)
  (lambda (x)
    (f (f (f ... x)))))
wherein there can be any number of f’s (including none) applied to the argument x.
Since Church numerals are functions, they can be invoked (run). When invoked, a Church numeral consumes its first argument, a function, which gets bound to its parameter f. It then returns the function (lambda (x) ...) as its result. (lambda (x) ...) will then wait to be invoked on an argument, which it binds to its parameter x. It will then run the (f (f (f ... x))) part, which is to say it will apply the function f to the argument x as many times as there are f’s. (The value of the Church numeral is determined by how many f’s get applied to the argument. If no f’s get applied, that represents 0.)
Note that there is nothing recursive about a Church numeral. It simply “iteratively” applies a function to an argument a predetermined number of times depending on how many nested f’s are specified in the (lambda (x) ...) part.
We can pass any function we want into a Church numeral! If we pass in the Scheme function 1+ for the function, then pass in 0 for the argument, the Church numeral will sum itself up: ((<Church numeral> 1+) 0) => <regular number>. (The process of running a Church numeral on a function and an argument is herein called “unraveling”.)
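For instance, here is a hand-built Church numeral for 3 being unraveled (a sketch; MacScheme’s 1+ is replaced by a portable (lambda (n) (+ n 1))):

```scheme
;; The Church numeral for 3: applies f three times to x.
(define three
  (lambda (f)
    (lambda (x)
      (f (f (f x))))))

;; Unravel it by passing in an increment function and 0.
((three (lambda (n) (+ n 1))) 0)   ; => 3
```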
Basically, Church numerals are a game of unraveling, using different things for the arguments.
For instance we can unravel using a tuple maker for the function argument, or even another Church numeral! (Exercise for the reader: What arithmetic function gets implemented when we unravel using a Church numeral for the function argument? What happens if we “partially unravel” a Church numeral (invoke it on a function, but not on an argument) then use that as the function to some other Church numeral?)
To recurse, or not to recurse.
That is the question.
Previously, the use of recursion to express compred was likened to using a bulldozer to dig a hole for petunias. What about the recursion in comfact itself?
It turns out recursion is unnecessary there too. The same style of trick used in compred can be used in comfact: create tuples that contain the result in the car field and the number we’re on in the cdr field. (See predstylefact in file “stuff.sch”.)
The initial tuple would be created by: (cons 1 1). The tuple maker would return a new tuple that has (* (car tuple) (cdr tuple)) in the car field and (succ (cdr tuple)) in the cdr field. The nesting of function calls in a Church numeral can then provide the correct number of function invocations of the tuple maker.
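A sketch of what predstylefact might look like, reconstructed from the description above (the real definition lives in “stuff.sch”; this version is an assumption, using ordinary Scheme cons for the tuples):

```scheme
;; Each application turns (result . i) into (result*i . i+1).
(define facttuplemaker
  (lambda (tuple)
    (cons (* (car tuple) (cdr tuple))
          (+ (cdr tuple) 1))))

;; No recursion: the Church numeral supplies the right number of calls.
(define predstylefact
  (lambda (churchn)
    (car ((churchn facttuplemaker) (cons 1 1)))))

;; With three defined as (lambda (f) (lambda (x) (f (f (f x))))):
;; (predstylefact three) => 6
```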
How about using the compred trick in implementing comquo? Although Church numerals are like for loops, they do have the number of iterations the loop can do “hard wired” into each numeral. In the case of computing a quotient, we want to find out how many times a divisor can be subtracted from a dividend. When we find out, we call the result the quotient. Unfortunately, we don’t know ahead of time how many times to perform the subtraction. (That’s precisely what we’re trying to find out.) Therefore, we need a way to get an unpredetermined number of iterations, or, we need to find a way to “blow out” of function calls prematurely; we could then attempt to do the subtractions dividend times and blow out when the remainder is less than the divisor. (Question for the Überprogrammer: do we have a way to blow out of recursive calls prematurely in λ-calculus? How about in Scheme?)
Recursion: A problem that keeps coming back to you.
In the last article, I posed the question of ways to get the effect of recursion other than the Y combinator and using state (i.e. using set! plus let to get letrec).
Let’s consider the problem again. You want to know your own definition while you’re inside your own definition. How could we achieve that? For that to happen, we’d have to have our own definition inside our own definition. How do we normally get information inside a function? By passing it in!
So, instead of (define fact (lambda (n) ...)) we want (define fact (lambda (f n) ...)), and we must pass the fact definition in so that it will get bound to f [Dertouzos et al., 1974] (also [Gabriel, 1988]). The recursive call will now look like: (f f (- n 1)) instead of (fact (- n 1)). The initial call will get set in motion like this: (fact fact 5). (See recursionlessfact in file “stuff.sch”.)
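A sketch of recursionlessfact along these lines (the real one is in “stuff.sch”; this reconstruction is an assumption built from the description):

```scheme
;; fact receives itself as its first argument, so the "recursive"
;; call is just an ordinary call through the parameter f.
(define recursionlessfact
  (lambda (f n)
    (if (zero? n)
        1
        (* n (f f (- n 1))))))

(recursionlessfact recursionlessfact 5)   ; => 120
```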
Note that this method of recursion elimination requires actually tweaking the function we’re eliminating recursion on. Specifically, it makes an n-arity (“arity” refers to the number of arguments a function expects) function into an n+1-arity function. Y requires us to perform an abstraction on any function we want to eliminate recursion from, but abstraction merely encloses it in another lambda form; it doesn’t alter the “guts” of the function itself.
Note that neither this trick nor Y alleviates the need for a stack (at some lower level). Invoking lambda forms on arguments requires a stack (at least the way we’re doing it). (Question for the reader: if we implement a λ-calculus interpreter which does everything in a purely syntactic fashion, would we still need a stack?)
Überprogrammer question: Are there any other ways to get recursion, or have we covered them all?
Dynamic Scoping
I know it’s hard to believe now, but once upon a time, LISPs scoped dynamically. That is to say, free variables derived their meanings from their callers’ environments rather than from parental environments (“environment of definition”).
Our minimalist game would be in grave danger with dynamic scoping because when you pass around a function that has free variables in it, there’s a good chance you’ll snag a variable of the same name in a caller’s environment rather than in the (usually) intended parent’s environment.
Here’s an example to illustrate the difference between dynamic and lexical scoping. The code below returns 5 because Scheme scopes lexically, but would return 3 if scoping were dynamic.
;3
>>> (define x 5)
x
>>> (define foo
      (lambda ()
        (fido 3)))
foo
>>> (define fido
      (lambda (x)
        (fifi)))
fido
>>> (define fifi
      (lambda ()
        x))
fifi
>>> (foo)
5
If you recall from the previous article, I mentioned how scoping behaves rather oddly once the global (top level) environment is reached. For instance, you can mention functions and/or variables that don’t exist when defining something as long as you define the missing things before you invoke anything that uses them. Do you think it’s fair to say that modern LISPs scope lexically until the top level, at which time scoping becomes dynamic?
Recursion via lazy evaluation?
In trying to answer the openended question of what other ways recursion can be expressed other than Y, assignment, and passing the needed function as an argument, one might be tempted to think lazy streams might help.
Consider for instance this lazy stream [Abelson et al., 1985].
;4
>>> (define ones (cons 1 (lambda () ones)))
ones
>>> (car ones)
1
>>> (car (forceathunk (cdr ones)))
1
>>>
(car (forceathunk (cdr (forceathunk (cdr ones)))))
1
How about if we went back to the letrec via let plus set! example from the previous article, and thunkified the recursive call to try to get the same sort of behavior that the ones example exhibits. Will it work?
No, because the reason the ones example works is not entirely due to lazy evaluation. It relies on peculiarities of the top level. This can be shown more clearly by converting the ones example into a local (rather than global) piece of code and changing names (lest we inadvertently grab something that already exists).
;5
>>> (let ((myones (cons 1 (lambda () myones))))
      (car (forceathunk (cdr myones))))
ERROR: Undefined global variable
myones
Entering debugger. Enter ? for help.
debug:>
Y curry?
Notice how applicativeordery assumes that the function it’s going to be applied to is a function of one argument. What happens if we want to use Y to get rid of recursion in functions that have more than one argument?
This is no problem: currying takes care of it, because all combinators take only one argument anyway. If there are additional arguments, they can’t be “seen” until one argument is consumed, at which time one more parameter will be ready to take an argument. In this way, they are like retractable claws! They stay out of the way until they are needed.
If we didn’t use currying, we would have to have a different version of applicativeordery for each arity we want to handle (i.e. one version for functions of one argument, another version for functions of two arguments, etc.).
(Another approach would be to make use of a variable-argument mechanism such as is present in Common LISP, but that violates our draconian adherence to combinators and perhaps seems less uniform, elegant, and simple.)
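For instance, here is a curried two-argument exponentiation function run through applicativeordery (a sketch; applicativeordery comes from file “combinators.sch”, and myexpt is a hypothetical name):

```scheme
(define applicativeordery
  (lambda (f)
    ((lambda (x) (f (lambda (arg) ((x x) arg))))
     (lambda (x) (f (lambda (arg) ((x x) arg)))))))

;; Currying lets one-argument-at-a-time Y handle "two" arguments:
;; the recursive call goes through self, one argument at a time.
(define myexpt
  (applicativeordery
   (lambda (self)
     (lambda (m)
       (lambda (n)
         (if (zero? n)
             1
             (* m ((self m) (- n 1)))))))))

((myexpt 2) 5)   ; => 32
```

Note that com<? in “combinators.sch” uses exactly this pattern.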
Spatial considerations of Church numerals
An observation: Church numerals can take a lot of space, since representing n requires n nested function calls.
Instead, we could find the prime factors of n, then make a Church numeral by multiplying those primes together, with exponentiated factors exponentiated!
Plan: write a function primes that computes the prime factors of a number, then use it to make more space efficient Church numerals. (See file “primes.sch”.)
>>> (primes 32769)
(331 11 (3 . 2))
In the above example, we find 32,769 consists of prime factors 331, 11, and 3 to the 2nd power. Thus we could make the Church numeral for 32,769 by multiplying 331, 11 and the exponentiation of 3 to the 2nd power. The number of nested function calls will now be 331 + 11 + 3 + 2 = 347. Contrast this with 32,769 nested function calls.
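A sketch of what primes might look like (the real definition is in file “primes.sch”; this reconstruction is an assumption that merely reproduces the output format shown above):

```scheme
;; A factor appearing to the kth power (k > 1) is reported as (factor . k).
(define primes
  (lambda (n)
    (let loop ((n n) (divisor 2) (result '()))
      (if (> (* divisor divisor) n)
          (if (= n 1) result (cons n result))   ; leftover n is prime
          (if (zero? (remainder n divisor))
              (let count ((n n) (k 0))
                (if (zero? (remainder n divisor))
                    (count (quotient n divisor) (+ k 1))
                    (loop n
                          (+ divisor 1)
                          (cons (if (= k 1) divisor (cons divisor k))
                                result))))
              (loop n (+ divisor 1) result))))))

(primes 32769)   ; => (331 11 (3 . 2))
```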
To make this type of Church numeral, use the function number>compactchurch (see file “primes.sch”).
Speeding up Exponentiation of Regular Numbers
An aside: Exponentiation can be made more efficient for regular numbers (in a way that’s vaguely reminiscent of Church numeral composition). Since 2^4 is really 2^2 * 2^2, we can compute 2^2 once and multiply that result by itself; likewise, 2^2 is really 2^1 * 2^1, etc. [Abelson et al., 1985], [Knuth, 1981]. This saves having to do a lot of recomputations. For a number to an odd power, we do: m * m^(n - 1). For example, 2^5 is nothing more than 2 * 2^4. So, in the general case of m^n, rather than multiplying m together n - 1 times, we can use a divide and conquer algorithm, whereby we divide n in half until n bottoms out at 0, then multiply all the intermediate results together. (See function pow in file “primes.sch”.)
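A sketch of pow along these lines (the real one is in file “primes.sch”; this version is an assumption from the description):

```scheme
;; Divide and conquer: m^n by successive squaring.
(define square
  (lambda (x) (* x x)))

(define pow
  (lambda (m n)
    (cond ((zero? n) 1)
          ((even? n) (square (pow m (quotient n 2))))
          (else (* m (pow m (- n 1)))))))

(pow 2 10)   ; => 1024
```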
(Illustration: the divide and conquer strategy creates a tree, but only concerns itself with the encircled parts. When the recursion bottoms out, the multiplications begin; squaring 2^1 to give 2^2 which then gets squared to give 2^4.)
How many multiplications will the divide and conquer strategy require? The mathematical function that answers the question “How many times can you divide a number by 2?” is the log function, wherein the base of the log is what you’re dividing by. So, for us it would be log to the base 2 of n: log₂ n. In the example below, the number of multiplications will be about log₂ 200. On a calculator that has an “ln” key but no “log” key, this can be computed using natural logarithms (logs to the base e) as follows: ln(n) / ln(base) = ln(200) / ln(2) ≈ 7.64. Contrast that with 199 multiplications!
>>> (pow 2 200)
1606938044258990275541962092341162602522202993782792835301376
Temporal considerations of Church numerals
Whereas regular numbers do computations completely when an operation is requested, many Church numeral arithmetic operations are “lazy” in that they do as little work as possible until being forced into completing the job (i.e. when unraveling them). Specifically, they return a function which “promises” to do the computation should the numeral ever get unraveled.
Functions that don’t do any computation at computation time are efficient “computers” (herein, “computer” refers to a numeral’s behavior at computation time), and such functions take roughly the same amount of time to compute results, regardless of the size of the numerical arguments (which are simply pointer references within a closure).
The downside to this way of doing things is that Church numerals can be gluttons to “unravel”. And when we convert Church numerals to regular numbers, we are forced to unravel them. How efficiently they unravel can depend on how they were built; not all arithmetic routines build equally efficient numerals. (See file “stuff.sch” for the pathologically built slowthree. Note that by calling normalizechurch, any arbitrarily inefficient Church numeral can be converted to one of the same efficiency as those built by number>church. normalizechurch simply unravels using comsucc as the function and comzero as the argument. For all stats tests in file “stuff.sch”, the numerals used were created manually so that the statistics wouldn’t be thrown off.)
Perhaps the most interesting thing we can do with Church numerals is to unravel them, and that’s what takes the most time. So, let’s make a distinction between unraveling that takes place at computation time and unraveling that takes place at other times.
Herein, an “unraveler” or “unraveling <function>” will refer to a function or numeral that unravels at computation time, and an “unraveller” will refer to a numeral’s unraveling behavior at other than computation time.
compred is an unraveler. At computation time, it “conses” (creates tuples, which require space) while unraveling. It conses n times, where n is the value the Church numeral represents. It returns a result that is a “straight unraveller” (a straight unraveller is a numeral that’s built like number>church builds them). Note that straight unravellers aren’t as efficient as they would be if we had defined them “manually” (as is done near the bottom of the file “stuff.sch”). A straight unraveller will unravel n + 1 times. (One unravel is done for comzero.)
comsub is an efficient computer; however, it is not burdened with checking that m < n doesn’t happen when subtracting m - n. If it had to check for that case, it would probably be calling an unraveling predicate, which would then make comsub an inefficient computer.
comsub is an inefficient unraveller because it conses and has embedded unravellers.
The consing in comquo (the quotient function) can be described by the expression “YUK!”, because comquo compounds the inefficiencies of comsub (which in turn compounds the inefficiencies of compred).
comquo is an inefficient computer and unraveller. At computation time, it calls com<?, which makes use of the unraveling predicate comzero?. At unraveling time, it conses and does embedded unravelings.
comadd is an efficient computer. For m + n, the result will consist of a straight unraveller for n and two unravelings for m. I.e. there will be (n + 1) + 2 unravelings.
commul is an efficient computer. As an unraveller, it results in 1 unravel and 2 partial unravels.
compow is almost an efficient computer. At computation time it does a partial unravel. As an unraveller, it does 1 unravel.
Notice that compow, commul, and comadd all basically do different forms of function composition. The definition of commul is simpler than comadd’s, and compow’s is simpler yet: it just directly composes two Church numerals! Basically, the difference between these functions is a matter of how much unraveling takes place before composition.
Perhaps if one were to do a lot of computations and very few decodings of Church numerals to regular numbers, Church math might gain some efficiencies from its lazy style of doing things. Also, normalizechurch could be used to convert poor unravellers to efficient unravellers before any math is done on them (lest existing inefficiencies get compounded).
(Code note: the objects used by function stats, including function stats, are message passing objects with local state, modelled after the cons example in the recap. The message ‘? can be used to find out what messages an object will accept.)
Sometimes it’s okay to be lazy!
Here’s an example of a form of laziness at work in some other numerical system. Consider the (Chinese) abacus.
There are three different ways to represent 10 (and powers of 10 beyond 1). While one could “normalize” every time a computation produced a result that needed to be normalized, perhaps it might be more efficient to wait until another computation forces normalization.
In fact, if one were to normalize at every chance, one might find that certain beads were pushed up and down more than once, whereas they might have been moved around only once if one weren’t so anxious to normalize. Sometimes the normalizations can be subsumed into future computations. Sometimes if you don’t do something right away, the need to do it may later go away. (I.e. Sometimes it pays to be lazy and only do work that is proven to be needed.)
(A trivial example: Imagine all the ones beads being pushed up, representing 5. This situation seems like having the bases loaded in baseball, so the temptation might be to “normalize” it by pushing all the ones beads back down and pushing up one of the fives beads. That would later enable trivially adding anything less than 5. However, if the ones beads were left pushed up, and the next number to be added was 3, we could push up a fives bead and push down 2 of the ones beads, since 3 = 5 - 2. Contrast how many beads get moved around this way versus how many would have been moved around had we normalized before adding 3.
A parting observation: The abacus is so redundant that it often allows choices as to when and how to normalize, or not normalize. This allows room for personal style.)
Consumer oriented numerals
In file “stuff.sch”, (what are herein called) “consumer” numerals are introduced. Essentially, consumer numerals are a more formalized way of thinking about n-consumers (which might be a déjà vu for the reader, as they were actually used in the last article to describe how comzero? works).
Consumer numerals are not nearly as “verb” oriented as Church numerals (which are strongly function oriented, as can be seen by the number of function invocations (“fireworks”) that can be set in motion when a numeral is invoked; consider the powerful composition at work in compow, for example). At the same time, consumer numerals aren’t as “noun” oriented as list numerals (wherein n is represented by a list of n elements; list numerals treat numbers as passive data objects).
For consumer numerals, abstraction becomes the successor function and function application becomes the predecessor function. n is represented in terms of how many arguments the numeral is capable of consuming.
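A sketch of consumer numerals built from that description (the names here are hypothetical; see file “stuff.sch” for the real treatment):

```scheme
;; n is represented by how many arguments the numeral can still consume.
(define consumerzero 'spent)            ; consumes no arguments

(define consumersucc                    ; abstraction = successor
  (lambda (n)
    (lambda (ignored) n)))

(define consumerpred                    ; application = predecessor
  (lambda (n)
    (n 'anything)))

(consumerpred (consumersucc consumerzero))   ; => spent, i.e. consumerzero
```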
Odds and Ends
Note the following behavior:
>>> (comnull? 'a)
ERROR: Bad procedure
a
Entering debugger. Enter ? for help.
debug:>
Is this due to evaluation differences between Scheme (applicative order) and λ-calculus (normal order), like we encountered in implementing comif? Or is comnull? designed to only accept comnil or a tuple made by comcons?
Exercise: Compare and contrast the Révész version of comadd (see file “combinators.sch”) with the vanMeule version (which was designed in the style of comsub). The unraveling in the Révész comadd should be made explicit via calls to unravel and/or partialunravel if you want to use stats to analyze its performance. Could commul be written in terms of comadd, and compow in terms of commul? (The question is really one of making use of an operator that wants two args versus a unary operator like comsucc.)
Exercise from last article: making thunks toe the line. If we want thunks to take one argument (as the rest of our combinators do), we can do the following.
To thunkify an expression <E>, wrap a lambda of one argument around it like this: (lambda (x) <E>) where x does not occur free in <E>. I.e. x must not be a free variable in <E>, lest we inadvertently snag the x of the thunk when <E> gets evaluated.
To force a thunk: notice that our new thunks are like consumer oriented numerals. We therefore want to take the predecessor, which is to say we want to force it by applying it to <anything>.
;6
(define newforce
  (lambda (thunk)
    (thunk 'anything)))
Looking Ahead
In file “combinators.sch” the reader will find a goodly number of combinators defined.
The reader can now write sorting routines, mapping functions, and various other list oriented functions.
It might be interesting at this point to implement a “metacircular” interpreter using combinators (an evaluator written in the same language it evaluates is said to be metacircular [Abelson et al., 1985]). This could serve various purposes such as allowing us to:
1) See exactly what is required. Do we currently have everything we need to implement LISP? (How would we handle state?) How about a lcalculus interpreter?
2) Explore language and implementation issues; the restrictiveness of combinators might enable issues to come to light that might otherwise get missed.
3) Be able to see the guts of combinators! Unfortunately, the trend these days is towards abolishing interpreters. (MacScheme compiles everything when you press <enter>.) Using a metacircular interpreter to run our combinators, we could see what they look like internally. For instance, it would be nice if identity evaluated to: (closedlambda (x) x). In Scheme it evaluates to #<PROCEDURE identity> because it gets compiled to some unprintable form.
4) Metacircular interpreters are useful for prototyping new languages; once we’ve got one, we might want to tweak it to evaluate in normal order instead of applicative order. Thereafter, a tweak could be made to allow partial evaluation in order to evolve our metacircular interpreter closer to a λ-calculus interpreter.
All these things are beyond the game plan for this article; I merely wanted to suggest possible directions that could be pursued.
“Thanks” to:
• Henry Baker for donating the title of this article (full well realizing that no good deed goes unpunished), and for showing me the (fact fact ...) trick.
• Verbosity buster John Koerber for valiant efforts against the Department of Redundancy Department.
• The jacuzzi in which this article was conjured.
Bugs/infelicities due to chlorine vapors.
Bibliography and References
[Abelson et al, 1985] Harold Abelson and Gerald Jay Sussman with Julie Sussman. Structure and Interpretation of Computer Programs. MIT Press, Cambridge, Massachusetts, USA, 1985.
[Dertouzos et al., 1974] Michael L. Dertouzos, Stephen A. Ward, Joseph Weizenbaum. Course 6.031 Structure and Interpretation of Computer Languages. MIT, Cambridge, MA, 1974-75.
[Field et al., 1989] Anthony J. Field and Peter G. Harrison. Functional Programming. Addison-Wesley Publishing Company. First printed 1988. Reprinted 1989.
[Gabriel, 1988] Richard P. Gabriel. The Why of Y. LISP Pointers, vol. 2, no. 2, October-November-December, 1988.
[Katz, 1988] Morry J. Katz. Katz’s notes from Information Sciences Function sponsored lecture series on Computer Science. Rockwell International Science Center, Thousand Oaks, CA, March, 1988.
[Knuth, 1981] Donald E. Knuth. The Art of Computer Programming, second edition, vol. 2, Seminumerical Algorithms. Addison-Wesley Publishing Company, 1981.
[Michaelson, 1989] Greg Michaelson. An Introduction to Functional Programming through Lambda Calculus. Addison-Wesley Publishing Company, 1989.
[Peyton Jones, 1987] Simon L. Peyton Jones. The Implementation of Functional Programming Languages. Prentice-Hall International, 1987.
[Queinnec, 1990] Christian Queinnec. A Subjective View of Lisp. LISP Pointers, Volume 3, Number 1. ACM, NY, July 1989 - March 1990.
[Ramsdell, 1986] John D. Ramsdell. The Curry Chip. In Proceedings of the 1986 ACM Conference on LISP and Functional Programming. Cambridge, MA, August 4-6, 1986.
[Rees et al, 1986] Jonathan Rees and William Clinger (editors). Revised3 Report on the Algorithmic Language Scheme; AI Memo 848a. MIT Artificial Intelligence Laboratory, Cambridge, Massachusetts, USA, September 1986.
[Révész, 1988] György E. Révész. Lambda-Calculus, Combinators, and Functional Programming. Cambridge University Press, Cambridge, England, 1988.
. . .
The Scheme combinators presented herein were derived from λ-calculus versions. Except for Y and others where noted, the λ-calculus versions are from [Révész, 1988].
MacScheme™ is put out by Lightship Software, P.O. Box 1636, Beaverton, OR 97075 USA. Phone: (503) 6436909.
. . .
André can be reached on the Internet:
vanMeule@cup.portal.com
; File: combinators.sch. (Eval first.)
;
; Projection functions.
;
(define identity ; project1stof1
  (lambda (x) x))
;
(define project1stof2
  (lambda (x)
    (lambda (y)
      x)))
;
(define project2ndof2
  (lambda (x)
    identity))
;
(define project3rdof3
  (lambda (x)
    (lambda (y)
      identity)))
;
(define 3consumer project3rdof3)
;
; Booleans and conditionals.
;
(define comtrue
  project1stof2)
;
(define comfalse
  project2ndof2)
;
(define forceathunk ; used by comif and others.
  (lambda (thunk)
    (thunk)))
;
(define comif
  (lambda (condition)
    (lambda (then)
      (lambda (else)
        (forceathunk ((condition then) else))))))
;
(define comnot ; [Michaelson, 1989]
  (lambda (x)
    (((comif x)
      (lambda () comfalse))
     (lambda () comtrue))))
;
(define comand ; [Field, 1989]
  (lambda (x)
    (lambda (y)
      ((x y) comfalse))))
;
(define comor ; [Field, 1989]
  (lambda (x)
    (lambda (y)
      ((x comtrue) y))))
;
; List primitives.
;
(define comcons
  (lambda (x)
    (lambda (y)
      (lambda (selector)
        ((selector x) y)))))
;
(define comcar
  (lambda (object)
    (object project1stof2)))
;
(define comcdr
  (lambda (object)
    (object project2ndof2)))
;
(define comnil ; project2ndof3
  (lambda (x) ; [Field, 1989]
    comtrue))
;
(define comnull? ; [Field, 1989]
  (lambda (tuple)
    (tuple (lambda (head)
             (lambda (tail)
               comfalse)))))
;
; Y combinator.
;
(define applicativeordery
  (lambda (f)
    ((lambda (x) (f (lambda (arg) ((x x) arg))))
     (lambda (x) (f (lambda (arg) ((x x) arg)))))))
;
; The Mother of All Church numerals.
;
(define comzero
  project2ndof2)
;
; Church numeral predicates.
;
(define comzero?
  (lambda (n)
    (((unravel n) 3consumer) comtrue)))
;
(define comeven? ; [Révész, 1988] (not in book)
  (lambda (n)
    (((unravel n) comnot) comtrue)))
;
(define comodd? ; [Révész, 1988] (not in book)
  (lambda (n)
    (((unravel n) comnot) comfalse)))
;
(define com<? ; [vanMeule]
  (applicativeordery
   (lambda (lessthan?)
     (lambda (x)
       (lambda (y)
         (((comif (comzero? x))
           (lambda ()
             (((comif (comzero? y))
               (lambda () comfalse))
              (lambda () comtrue))))
          (lambda ()
            (((comif (comzero? y))
              (lambda () comfalse))
             (lambda () ((lessthan? (compred x))
                         (compred y)))))))))))
;
; Church numeral operators.
;
(define comsucc
  (lambda (n)
    (lambda (f)
      (lambda (x)
        (f (((unravel n) f) x))))))
;
(define makeascendingtuple ; part of pred
  (lambda (tuple)
    ((comcons
      (comcdr tuple))
     (comsucc (comcdr tuple)))))
;
(define initialpredtuple ; part of pred
  ((comcons "compred called on 0")
   comzero))
;
(define compred
  (lambda (n)
    (comcar
     (((unravel n)
       makeascendingtuple)
      initialpredtuple))))
;
(define comadd ; Révész version [Révész, 1988]
  (lambda (m)
    (lambda (n)
      (lambda (f)
        (lambda (x)
          ((m f)
           ((n f) x)))))))
;
(define comadd ; [vanMeule]
  (lambda (m)
    (lambda (n)
      (lambda (f)
        (lambda (x)
          (((unravel (((unravel n) comsucc) m))
            f) x))))))
;
(define comsub ; [vanMeule]
  (lambda (m)
    (lambda (n)
      (lambda (f)
        (lambda (x)
          (((unravel (((unravel n) compred) m))
            f) x))))))
;
(define commul
  (lambda (m)
    (lambda (n)
      (lambda (f)
        ((partialunravel m)
         ((partialunravel n) f))))))
;
(define comquo ; [vanMeule]
  (applicativeordery
   (lambda (thequo)
     (lambda (dividend)
       (lambda (divisor)
         (((comif ((com<? dividend) divisor))
           (lambda () comzero))
          (lambda ()
            (comsucc ((thequo ((comsub dividend)
                               divisor))
                      divisor)))))))))
;
(define comrem) ; Reader defines remainder.
;
(define compow ; [Katz, 1988]
  (lambda (m)
    (lambda (n)
      ((partialunravel n) m))))
;
; Church numeral utility functions.
;
(define number>church ; makechurchnumeral
  (lambda (n)
    (if (zero? n)
        comzero
        (comsucc
         (number>church (- n 1))))))
;
(define unravel
  (lambda (n)
    (lambda (f)
      (lambda (x)
        ((n f) x)))))
;
(define partialunravel
  (lambda (n)
    (lambda (f)
      (n f))))
;
(define church>number ; dechurchifynumeral
  (lambda (churchnumeral)
    (((unravel churchnumeral) 1+) 0)))
;
(define comone
  (lambda (f)
    (lambda (x)
      (f x))))
;
'done