Column Tag: Lisp Listener
Going Back to Church
By André van Meulebrouck, Chatsworth, CA
Scheme is a very clear language and its tutors follow a zen philosophy. Anything unessential or controversial (i.e. not so well understood) is thrown away. The latest Scheme report is an admirable document, and as semantical analysis progresses, slicing molecules, then atoms, and after that quarks, I dare say that the revised report on Scheme will converge to λ-calculus. -- Christian Queinnec [Queinnec, 1990]
This article will attempt to prompt interesting insights into recursive functions and evaluation strategies by looking at an alternate numeric system (Church numerals). It also touches on various concepts such as object-oriented programming, dynamic scoping, and lazy streams.
(Sneak preview: imagine a numeric system wherein the operations addition, subtraction, multiplication, and exponentiation all take roughly the same amount of time to compute regardless of the size of the numeric arguments! Church numerals accomplish this by returning functions that promise to do the computation later, yet these promises are still bona fide Church numerals that can be used in further computations. There are of course trade-offs from such laziness. For instance, when Church numerals are converted to regular numbers, computations that weren't done previously must be completed. Tools are provided for the reader to explore these trade-offs.)
A Hope for the Future?!?
Perhaps you might think of Alonzo Church's λ-calculus (and numerals) as impractical mental gymnastics, but consider: many times in the past, seemingly impractical theories became the underpinnings of future technologies (for instance, Boolean algebra).
Perhaps the reader can imagine a future much brighter and more enlightened than today. For instance, imagine computer architectures that run combinators or λ-calculus as their machine instruction sets.
Imagine further that different models will be available. For instance, one might be a really cheap, simple chip that implements only pure λ-calculus as its instruction set. This is the chip that manufacturers might want to use for controlling toasters or electrical systems in your car (space ship?). Without numbers, and with only limited need of integers, perhaps Church numerals might have practical import! Of course, the more expensive chips would probably have an extended λ-calculus and full numeric capabilities. Perhaps the trade-off between a pure λ-calculus chip and an extended λ-calculus chip would be vaguely analogous to the RISC (Reduced Instruction Set Computer) versus CISC (Complex Instruction Set Computer) controversy. For instance, the pure λ-calculus chip might be clocked faster by virtue of greater simplicity, and thereby might offer advantages over the extended λ-calculus chip, there again tempting uses of Church numerals or other "soft" numeral systems. ("Soft" here meaning software based, as opposed to "hard" meaning hardware based.)
In reality, combinator reduction machines have been built already and research on them continues. [Peyton Jones, 1987] briefly describes various projects devoted to parallel (combinator) reduction machines. [Ramsdell, 1986] describes the Curry Chip which is a combinator reduction machine in VLSI.
The Minimalist Game
All conjectures aside, I think the most compelling motivation for studying l-calculus is the minimalist game.
I was introduced to a minimalist game by an ex-Soviet Russian instructor who would allow students a severely restricted set of words and grammatical constructs, then ask questions. The idea was that one's expressive capability is not so much posited in how much one knows, but in how cleverly one wields what one knows. He was wont to point out that you are more likely to reveal yourself as a foreigner when you overextend your limits by trying to use grammatical constructs and words you aren't comfortable with than when you speak simply but correctly. Likewise, I propose λ-calculus as a minimalist game for Computer Science.
One observation from playing the minimalist game in Russian: what makes this game hard or easy isn't so much how limited the number of primitives is, but rather how powerful they are. I claim λ-calculus gives us (perhaps) the fewest possible primitives, and they are of supreme quality.
In the last article we covered a lot of ground--(re)consider the following.
Object-oriented programming: The combinator versions of car, cdr, and cons make use of message passing: cons makes a tuple object, and car and cdr do their thing by passing messages to that object. While this use of message passing might seem primitive, it's actually quite flexible and powerful, because the object doesn't have to have static knowledge of what possible messages could be passed in: the messages could be arbitrary functions. (The action taken on the message is to run the message itself!)
Here's how a message passing version of car, cdr, and cons could be implemented in Scheme (see [Abelson et al., 1985] for similar).
(Scheme's case statement is basically like case statements in other languages, except that the selectors are enclosed in parens. set-car! and set-cdr! are destructive operators that change those respective fields [Rees et al., 1986].)
MacScheme Top Level
>>> (define my-cons
      (lambda (x y)
        (lambda (message . args)
          (case message
            ((car) x)
            ((cdr) y)
            ((set-car!) (set! x (car args)))
            ((set-cdr!) (set! y (car args)))
            ((?) '(car cdr set-car! set-cdr! ?))
            (else (error "my-cons: bad message" message))))))
my-cons
>>> (define my-car
      (lambda (object)
        (object 'car)))
my-car
>>> (define my-cdr
      (lambda (object)
        (object 'cdr)))
my-cdr
>>> (define my-set-car!
      (lambda (object to-what)
        (object 'set-car! to-what)))
my-set-car!
>>> (define my-set-cdr!
      (lambda (object to-what)
        (object 'set-cdr! to-what)))
my-set-cdr!
>>> (define foo (my-cons 'a 'b))
foo
>>> (my-car foo)
a
>>> (my-cdr foo)
b
>>> (my-set-car! foo 3)
>>> (my-car foo)
3
(Note: What the user types at the MacScheme prompt >>> is shown in italics. MacScheme responses are in boldface.
Code alluded to but not described in the text can be found at the end of the article.)
Lazy evaluation: the ideas behind Scheme's force and delay were shown via thunks (lambda forms of no arguments). Our delay consisted of thunkifying the object to be delayed, and our force consisted of the function force-a-thunk, which simply invoked the thunk with no arguments. force and delay are important because they are used to create lazy streams [Abelson et al., 1985].
Closures, free variables, lexical scoping and higher-order functions: Combinators nicely motivate a tour de force introduction to these issues because combinators rely on them so heavily.
Unraveling the secrets of Church numerals
The slick thing about Church numerals is that they are essentially pre-initialized for loops ready to have a function and argument passed in.
Internally, a canonical Church numeral looks like:
(lambda (f)
  (lambda (x)
    (f (f (f ... x)))))
wherein there can be any number of f's (including none) applied to the argument x.
Since Church numerals are functions, they can be invoked (run). When invoked, a Church numeral will want to consume two arguments: the first is a function, which will get bound to its parameter f. Then, it will return the function (lambda (x) ...) as its result. (lambda (x) ...) will then wait to be invoked on an argument, which it will bind to its parameter x. It will then run the (f (f (f ... x))) part, which is to say it will apply the function f to the argument x as many times as there are f's. (The value of the Church numeral is determined by how many f's get applied to the argument. If no f's get applied, that represents 0.)
Note that there is nothing recursive about a Church numeral. It simply iteratively applies a function to an argument a predetermined number of times, depending on how many nested f's are specified in the (lambda (x) ...) part.
We can pass any function we want into a Church numeral! If we pass in the Scheme function 1+ for the function, then pass in 0 for the argument, the Church numeral will sum itself up: ((<Church numeral> 1+) 0) => <regular number>. (The process of running a Church numeral on a function and an argument is herein called unraveling.)
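For instance, here is a small sketch with hand-written numerals (church-zero and church-three are illustrative names; the article's own numerals are built by number->church in file combinators.sch, and 1+ is spelled out as (lambda (n) (+ n 1)) for portability):

```scheme
; Hand-written Church numerals, for illustration only.
(define church-zero
  (lambda (f) (lambda (x) x)))              ; zero f's applied: 0

(define church-three
  (lambda (f) (lambda (x) (f (f (f x))))))  ; three f's applied: 3

; Unravel by passing in Scheme's successor function and 0.
((church-three (lambda (n) (+ n 1))) 0) ; => 3
((church-zero  (lambda (n) (+ n 1))) 0) ; => 0
```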
Basically, Church numerals are a game of unraveling, using different things for the arguments.
For instance we can unravel, using a tuple maker for the function argument, or even another Church numeral! (Exercise for the reader: What arithmetic function gets implemented when we unravel using a Church numeral for the function argument? What happens if we partially unravel a Church numeral (invoke it on a function, but not on an argument) then use that as the function to some other Church numeral?)
To recurse, or not to recurse.
That is the question.
Previously, the use of recursion to express com-pred was likened to using a bulldozer to dig a hole for petunias. What about the recursion in com-fact itself?
It turns out recursion is unnecessary there too. The same style of trick used in com-pred can be used in com-fact: create tuples that contain the result in the car field and the number we're on in the cdr field. (See pred-style-fact in file stuff.sch.)
The initial tuple would be created by: (cons 1 1). The tuple maker would return a new tuple that has (* (car tuple) (cdr tuple)) in the car field and (succ (cdr tuple)) in the cdr field. The nesting of function calls in a Church numeral can then provide the correct number of function invocations of the tuple maker.
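In plain Scheme, with regular numbers and pairs standing in for the combinator machinery, the trick might be sketched like this (apply-n-times plays the role of the Church numeral's nested function calls; these names are assumptions, not necessarily those of file stuff.sch):

```scheme
; Tuple maker: (result . counter) -> (result * counter . counter + 1)
(define fact-tuple-maker
  (lambda (tuple)
    (cons (* (car tuple) (cdr tuple))
          (+ (cdr tuple) 1))))

; Stand-in for a Church numeral's n nested applications.
(define apply-n-times
  (lambda (f n x)
    (if (zero? n) x (apply-n-times f (- n 1) (f x)))))

(define pred-style-fact
  (lambda (n)
    (car (apply-n-times fact-tuple-maker n (cons 1 1)))))

(pred-style-fact 5) ; => 120
```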
How about using the com-pred trick in implementing com-quo? Although Church numerals are like for loops, they have the number of iterations the loop can do hard wired into each numeral. In the case of computing a quotient, we want to find out how many times a divisor can be subtracted from a dividend. When we find out, we call the result the quotient. Unfortunately, we don't know ahead of time how many times to perform the subtraction. (That's precisely what we're trying to find out.) Therefore, we need a way to get an unpredetermined number of iterations, or we need a way to blow out of function calls prematurely--we could then attempt to do the subtractions dividend times and blow out when the remainder is less than the divisor. (Question for the Überprogrammer: do we have a way to blow out of recursive calls prematurely in λ-calculus? How about in Scheme?)
Recursion: A problem that keeps coming back to you.
In the last article, I posed the question of what ways there are to get the effect of recursion other than the Y combinator and using state (i.e. using set! plus let to get letrec).
Let's consider the problem again. You want to know your own definition while you're inside your own definition. How could we achieve that? For that to happen, we'd have to have our own definition inside our own definition. How can we get our own definition inside? How do we normally get information inside a function? By passing it in!
So, instead of (define fact (lambda (n) ...)) we want (define fact (lambda (f n) ...)), and we must pass the fact definition in so that it will get bound with f [Dertouzos et al., 1974] (also [Gabriel, 1988]). The recursive call will now look like (f f (1- n)) instead of (fact (1- n)). The initial call will get set in motion like this: (fact fact 5). (See recursionless-fact in file stuff.sch.)
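A sketch of how recursionless-fact might look (the version in file stuff.sch may differ in detail):

```scheme
; Factorial with no recursive reference to its own name:
; the definition is passed in as the argument f.
(define recursionless-fact
  (lambda (f n)
    (if (zero? n)
        1
        (* n (f f (- n 1)))))) ; pass f along so it can call itself

(recursionless-fact recursionless-fact 5) ; => 120
```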
Note that this method of recursion elimination requires actually tweaking the function we're eliminating recursion on. Specifically, it makes an n-arity function (arity refers to the number of arguments a function expects) into an n+1-arity function. Y requires us to perform an abstraction on any function we want to eliminate recursion from, but abstraction merely encloses it in another lambda form; it doesn't alter the guts of the function itself.
Note that neither this trick nor Y alleviates the need for a stack (at some lower level). Invoking lambda forms on arguments requires a stack (at least the way we're doing it). (Question for the reader: if we implement a λ-calculus interpreter which does everything in a purely syntactic fashion, would we still need a stack?)
Überprogrammer question: Are there any other ways to get recursion, or have we covered them all?
I know it's hard to believe now, but once upon a time, LISPs scoped dynamically. That is to say, free variables derived their meanings from their callers' environments rather than from parental environments (environments of definition).
Our minimalist game would be in grave danger with dynamic scoping, because when you pass around a function that has free variables in it, there's a good chance you'll snag a variable of the same name in a caller's environment rather than in the (usually) intended parent's environment.
Here's an example to illustrate the difference between dynamic and lexical scoping. The code below returns 5 because Scheme scopes lexically, but would return 3 if scoping were dynamic.
>>> (define x 5)
x
>>> (define foo
      (lambda () x))
foo
>>> (define fido
      (lambda (x) (foo)))
fido
>>> (define fifi
      (lambda () (fido 3)))
fifi
>>> (fifi)
5
If you recall from the previous article, I mentioned how scoping behaves rather oddly once the global (top level) environment is reached. For instance, you can mention functions and/or variables that don't exist when defining something, as long as you define the missing things before you invoke anything that uses them. Do you think it's fair to say that modern LISPs scope lexically until the top level, at which time scoping becomes dynamic?
Recursion via lazy evaluation?
In trying to answer the open-ended question of what ways recursion can be expressed other than Y, assignment, and passing the needed function as an argument, one might be tempted to think lazy streams might help.
Consider for instance this lazy stream [Abelson et al., 1985].
>>> (define ones (cons 1 (lambda () ones)))
>>> (car ones)
1
>>> (car (force-a-thunk (cdr ones)))
1
>>> (car (force-a-thunk (cdr (force-a-thunk (cdr ones)))))
1
What if we went back to the letrec-via-let-plus-set! example from the previous article and thunkified the recursive call, to try to get the same sort of behavior that the ones example exhibits? Will it work?
No, because the reason the ones example works is not entirely due to lazy evaluation. It relies on peculiarities of the top level. This can be shown more clearly by converting the ones example into a local (rather than global) piece of code and changing names (lest we inadvertently grab something that already exists).
>>> (let ((my-ones (cons 1 (lambda () my-ones))))
(car (force-a-thunk (cdr my-ones))))
ERROR: Undefined global variable
Entering debugger. Enter ? for help.
Notice how applicative-order-y assumes that the function it's going to be applied to is a function of one argument. What happens if we want to use Y to get rid of recursion in functions that have more than one argument?
This is no problem--currying takes care of it, because all combinators have only one argument anyway. If there are additional arguments, they can't be seen until one argument is consumed, at which time one more parameter will be ready to take an argument. In this way, they are like retractable claws! They stay out of the way until they are needed.
If we didn't use currying, we would have to have a different version of applicative-order-y for each arity we wanted to handle (i.e. one version for functions of one argument, another version for functions of two arguments, etc.).
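For instance, a curried "two-argument" function goes through the unchanged one-argument Y (a sketch; curried-expt is a made-up example, and applicative-order-y is as defined in file combinators.sch):

```scheme
; Applicative-order Y combinator (fixes one-argument functions).
(define applicative-order-y
  (lambda (f)
    ((lambda (x) (f (lambda (arg) ((x x) arg))))
     (lambda (x) (f (lambda (arg) ((x x) arg)))))))

; A curried two-argument function: every lambda takes exactly one
; argument, so the one-argument Y handles it without modification.
(define curried-expt
  (applicative-order-y
   (lambda (self)
     (lambda (m)
       (lambda (n)
         (if (zero? n)
             1
             (* m ((self m) (- n 1)))))))))

((curried-expt 2) 10) ; => 1024
```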
(Another approach would be to make use of a variable-argument mechanism such as is present in Common LISP, but that violates our draconian adherence to combinators and perhaps seems less uniform, elegant, and simple.)
Spatial considerations of Church numerals
An observation: Church numerals can take a lot of space, since representing n requires n nested function calls.
Instead, we could find out what the prime factors of n are, then make a Church numeral by multiplying those primes together, with repeated factors exponentiated!
Plan: write a function primes that computes the prime factors of a number, then use it to make more space efficient Church numerals. (See file primes.sch.)
>>> (primes 32769)
(331 11 (3 . 2))
In the above example, we find 32,769 consists of prime factors 331, 11, and 3 to the 2nd power. Thus we could make the Church numeral for 32,769 by multiplying 331, 11 and the exponentiation of 3 to the 2nd power. The number of nested function calls will now be 331 + 11 + 3 + 2 = 347. Contrast this with 32,769 nested function calls.
To make this type of Church numeral, use the function number->compact-church (see file primes.sch).
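The actual primes routine lives in file primes.sch; a sketch of what such a routine might look like follows (the helper factor-out and the internal details are assumptions, though the output format matches the transcript above):

```scheme
; Divide d out of n as many times as possible;
; return (reduced-n . count).
(define factor-out
  (lambda (n d count)
    (if (zero? (remainder n d))
        (factor-out (quotient n d) d (+ count 1))
        (cons n count))))

; Return the prime factorization of n, with repeated factors
; shown as (base . exponent) pairs.
(define primes
  (lambda (n)
    (let loop ((n n) (d 2) (acc '()))
      (cond ((= n 1) acc)
            ((zero? (remainder n d))
             (let ((pair (factor-out n d 0)))
               (loop (car pair)
                     (+ d 1)
                     (cons (if (= (cdr pair) 1)
                               d
                               (cons d (cdr pair)))
                           acc))))
            (else (loop n (+ d 1) acc))))))

(primes 32769) ; => (331 11 (3 . 2))
```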
Speeding up Exponentiation of Regular Numbers
An aside: exponentiation can be made more efficient for regular numbers (in a way that's vaguely reminiscent of Church numeral composition). Since 2^4 is really 2^2 * 2^2, we can compute 2^2 once and multiply that result by itself; likewise, 2^2 is really 2^1 * 2^1, etc. [Abelson et al., 1985], [Knuth, 1981]. This saves having to do a lot of recomputation. For a number to an odd power, we do m * m^(n - 1); for example, 2^5 is nothing more than 2 * 2^4. So, in the general case of m^n, rather than multiplying m together n - 1 times, we can use a divide and conquer algorithm, whereby we divide n in half until n becomes 0, then multiply all the intermediate results together. (See function pow in file primes.sch.)
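A sketch of the divide and conquer strategy (function pow in file primes.sch may differ in detail):

```scheme
(define square (lambda (m) (* m m)))

; m^n by successive squaring: even powers square a half-size
; result; odd powers peel off one factor of m.
(define pow
  (lambda (m n)
    (cond ((zero? n) 1)
          ((even? n) (square (pow m (quotient n 2))))
          (else (* m (pow m (- n 1)))))))

(pow 2 10) ; => 1024
```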
(Illustration: the divide and conquer strategy creates a tree, but only concerns itself with the encircled parts. When the recursion bottoms out, the multiplications begin; squaring 2^1 to give 2^2 which then gets squared to give 2^4.)
How many multiplications will the divide and conquer strategy require? The mathematical function that answers the question "How many times can you divide a number by 2?" is the log function, wherein the base of the log is what you're dividing by. So, for us it would be log to the base 2 of n: log2(n). In the below example, the number of multiplications will be log2(200). On a calculator that has an ln key but no log key, this can be computed using natural logarithms (logs to the base e) as follows: ln(n) / ln(base) = ln(200) / ln(2) ≈ 7.6. Contrast that with 199 multiplications!
>>> (pow 2 200)
Temporal considerations of Church numerals
Whereas regular numbers do computations completely when an operation is requested, many Church numeral arithmetic operations are lazy in that they do as little work as possible until being forced into completing the job (i.e. when unraveling them). Specifically, they return a function which promises to do the computation should the numeral ever get unraveled.
Functions that don't do any computation at computation time are efficient computers (herein, computer refers to a numeral's behavior at computation time), and such functions take roughly the same amount of time to compute results, regardless of the size of the numerical arguments (which are simply pointer references within a closure).
The downside to this way of doing things is that Church numerals can be gluttons to unravel. And when we convert Church numerals to regular numbers, we are forced to unravel them. How efficiently they unravel can depend on how they were built--not all arithmetic routines build equally efficient numerals. (See file stuff.sch for the pathologically built slow-three. Note that by calling normalize-church, any arbitrarily inefficient Church numeral can be converted to one of the same efficiency as those built by number->church. normalize-church simply unravels using com-succ as the function and com-zero as the argument. For all stats tests in file stuff.sch, the numerals used were created manually so that the statistics wouldn't be thrown off.)
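Per that description, normalize-church amounts to a one-liner over the combinators in file combinators.sch (a sketch; the version in stuff.sch may instrument the unraveling for stats):

```scheme
; Rebuild a numeral by unraveling it with com-succ and com-zero,
; yielding a straight unraveller of the same value.
(define normalize-church
  (lambda (church-numeral)
    ((church-numeral com-succ) com-zero)))
```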
Perhaps the most interesting thing we can do with Church numerals is to unravel them, and that's what takes the most time. So, let's make a distinction between unraveling that takes place at computation time and unraveling that takes place at other times.
Herein, an unraveler or unraveling <function> will refer to a function or numeral that unravels at computation time, and an unraveller will refer to a numeral's unraveling behavior at other than computation time.
com-pred is an unraveler. At computation time, it conses (creates tuples, which require space) while unraveling. It conses n times, where n is the value the Church numeral represents. It returns a result that is a straight unraveller (a straight unraveller is a numeral that's built the way number->church builds them). Note that straight unravellers aren't as efficient as they would be if we had defined them manually (as is done near the bottom of the file stuff.sch). A straight unraveller will unravel n + 1 times. (One unravel is done for com-zero.)
com-sub is an efficient computer; however, it is not burdened with checking that m < n doesn't happen when subtracting m - n. If it had to check for that case, it would probably be calling an unraveling predicate, which would then make com-sub an inefficient computer.
com-sub is an inefficient unraveller because it conses and has embedded unravellers: for m - n, each of the n embedded com-pred unravelings does its own consing.
The consing in com-quo (the quotient function) can be described by the expression YUK!, because com-quo compounds the inefficiencies of com-sub (which in turn compounds the inefficiencies of com-pred).
com-quo is an inefficient computer and unraveller. At computation time, it calls com-<?, which makes use of the unraveling predicate com-zero?. At unraveling time, it conses and does embedded unravelings.
com-add is an efficient computer. For m + n, the result will consist of a straight unraveller for n and two unravelings for m. I.e. there will be (n + 1) + 2 unravelings.
com-mul is an efficient computer. As an unraveller, it results in 1 unravel and 2 partial unravels.
com-pow is almost an efficient computer. At computation time it does a partial unravel. As an unraveller, it does 1 unravel.
Notice that com-pow, com-mul, and com-add all basically do different forms of function composition. The definition of com-mul is simpler than that of com-add, and com-pow is simpler yet: it just directly composes two Church numerals! Basically, the difference between these functions is a matter of how much unraveling takes place before composition.
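The direct composition behind exponentiation can be seen with hand-written numerals (hypothetical names; church->num plays the role of church->number):

```scheme
; Decode a Church numeral into a regular number.
(define church->num
  (lambda (c) ((c (lambda (n) (+ n 1))) 0)))

(define two   (lambda (f) (lambda (x) (f (f x)))))
(define three (lambda (f) (lambda (x) (f (f (f x))))))

; Exponentiation is direct composition: (three two) is 2^3.
(church->num (three two)) ; => 8
```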
Perhaps if one were to do a lot of computations and very few decodings of Church numerals to regular numbers, Church math might gain some efficiencies from its lazy style of doing things. Also, normalize-church could be used to convert poor unravellers to efficient unravellers before any math is done on them (lest existing inefficiencies get compounded).
(Code note: the objects used by function stats, including function stats, are message passing objects with local state, modelled after the cons example in the recap. The message ? can be used to find out what messages an object will accept.)
Sometimes it's okay to be lazy!
Here's an example of a form of laziness at work in some other numerical system. Consider the (Chinese) abacus.
There are three different ways to represent 10 (and powers of 10 beyond 1). While one could normalize every time a computation produced a result that needed to be normalized, perhaps it might be more efficient to wait until another computation forces normalization.
In fact, if one were to normalize at every chance, one might find that certain beads were pushed up and down more than once, whereas they might have been moved around only once if one weren't so anxious to normalize. Sometimes the normalizations can be subsumed into future computations. Sometimes if you don't do something right away, the need to do it may later go away. (I.e. sometimes it pays to be lazy and only do work that is proven to be needed.)
(A trivial example: Imagine all the ones beads being pushed up, representing 5. This situation seems like having the bases loaded in baseball, so the temptation might be to normalize that by pushing all the ones beads back down and pushing up one of the fives beads. That would later enable trivially adding anything less than 5. However, if the ones beads were left pushed up, and the next number to be added was 3, we could push up a fives bead and push down 2 of the ones beads, since 3 = 5 - 2. Contrast how many beads get moved around this way versus how many would have been moved around had we normalized before adding 3.)
A parting observation: The abacus is so redundant that it often allows choices as to when and how to normalize, or not normalize. This allows room for personal style.)
Consumer oriented numerals
In file stuff.sch, (what are herein called) consumer numerals are introduced. Essentially, consumer numerals are a more formalized way of thinking about n-consumers (which might be a déjà vu for the reader as they were actually used in the last article to describe how com-zero? works).
Consumer numerals are not nearly as verb oriented as Church numerals (which are strongly function oriented, as can be seen by the number of function invocations (fireworks) that can be set in motion when a numeral is invoked--consider the case of the powerful composition at work in com-pow, for example). At the same time, consumer numerals aren't as noun oriented as list numerals (wherein n is represented by a list of n elements--list numerals are a treatment of numbers as passive data objects).
For consumer numerals, abstraction becomes the successor function and function application becomes the predecessor function. n is represented in terms of how many arguments the numeral is capable of consuming.
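A tiny sketch of the idea (these names are assumptions, not necessarily those in stuff.sch): each abstraction adds one to the numeral, and each application takes one away.

```scheme
; 0 is a base value; n is whatever can consume n arguments
; before yielding the base value.
(define consumer-zero 'bottom)
(define consumer-two
  (lambda (a) (lambda (b) 'bottom)))

; Successor: wrap one more (ignored) argument consumer around n.
(define consumer-succ
  (lambda (n) (lambda (ignored) n)))

; Predecessor: apply to anything, consuming one argument.
(define consumer-pred
  (lambda (n) (n 'anything)))
```

Here (consumer-pred consumer-two) yields a one-argument consumer, and (consumer-succ consumer-zero) yields one back again.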
Odds and Ends
Note the following behavior:
>>> (com-null? 'a)
ERROR: Bad procedure
Entering debugger. Enter ? for help.
Is this due to evaluation differences between Scheme (applicative order) and λ-calculus (normal order), like those we encountered in implementing com-if? Or is com-null? designed to accept only com-nil or a tuple made by com-cons?
Exercise: Compare and contrast the Révész version of com-add (see file combinators.sch) with the vanMeule version (which was designed in the style of com-sub). The unraveling in the Révész com-add should be made explicit via calls to unravel and/or partial-unravel if you want to use stats to analyze its performance. Could com-mul be written in terms of com-add, and com-pow in terms of com-mul? (The question is really one of making use of an operator that wants two args versus a unary operator like com-succ.)
Exercise from last article: making thunks toe the line. If we want thunks to have one argument (as the rest of our combinators have), we can do the following.
To thunkify an expression <E>, wrap a lambda of one argument around it like this: (lambda (x) <E>), where x does not occur free in <E>. I.e., x must not be a free variable in <E>, lest we inadvertently snag the x of the thunk when <E> gets evaluated.
To force a thunk: notice that our new thunks are like consumer oriented numerals. We therefore want to take the predecessor, which is to say we want to force it by applying it to <anything>.
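Putting the two halves of the exercise together (a sketch, where <E> is just (* 6 7)):

```scheme
; A thunkified <E>: a one-argument lambda whose parameter is
; not free in <E>.
(define delayed-answer
  (lambda (x) (* 6 7)))

; Forcing: apply the thunk to anything at all (taking its
; "predecessor", in consumer numeral terms).
(delayed-answer 'anything) ; => 42
```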
In file combinators.sch the reader will find a goodly number of combinators defined.
The reader can now write sorting routines, mapping functions, and various other list oriented functions.
It might be interesting at this point to implement a metacircular interpreter using combinators (an evaluator written in the same language it evaluates is said to be metacircular [Abelson et al., 1985]). This could serve various purposes such as allowing us to:
1) See exactly what is required. Do we currently have everything we need to implement LISP? (How would we handle state?) How about a λ-calculus interpreter?
2) Explore language and implementation issues--the restrictiveness of combinators might enable issues to come to light that might otherwise get missed.
3) Be able to see the guts of combinators! Unfortunately, the trend these days is towards abolishing interpreters. (MacScheme compiles everything when you press <enter>.) Using a metacircular interpreter to run our combinators, we could then see what they look like internally. For instance, it would be nice if identity evaluated to (closed-lambda (x) x). In Scheme it evaluates to #<PROCEDURE identity> because it gets compiled to some unprintable form.
4) Metacircular interpreters are useful for prototyping new languages--once we've got one, we might want to tweak it to evaluate in normal order instead of applicative order. Thereafter, a tweak could be made to allow partial evaluation, in order to evolve our metacircular interpreter closer to a λ-calculus interpreter.
All these things are beyond the game plan for this article--I merely wanted to suggest possible directions that could be pursued.
Acknowledgments
Henry Baker, for donating the title of this article (full well realizing that no good deed goes unpunished), and for showing me the (fact fact ...) trick.
Verbosity buster John Koerber for valiant efforts against the Department of Redundancy Department.
The jacuzzi in which this article was conjured.
Bugs/infelicities due to chlorine vapors.
Bibliography and References
[Abelson et al., 1985] Harold Abelson and Gerald Jay Sussman with Julie Sussman. Structure and Interpretation of Computer Programs. MIT Press, Cambridge, Massachusetts, USA, 1985.
[Dertouzos et al., 1974] Michael L. Dertouzos, Stephen A. Ward, Joseph Weizenbaum. Course 6.031 Structure and Interpretation of Computer Languages. MIT, Cambridge, MA, 1974-75.
[Field et al., 1989] Anthony J. Field and Peter G. Harrison. Functional Programming. Addison-Wesley Publishing Company. First printed 1988. Reprinted 1989.
[Gabriel, 1988] Richard P. Gabriel. The Why of Y. LISP Pointers, vol. 2, no. 2 October-November-December, 1988.
[Katz, 1988] Morry J. Katz. Katzs notes from Information Sciences Function sponsored lecture series on Computer Science. Rockwell International Science Center, Thousand Oaks, CA, March, 1988.
[Knuth, 1981] Donald E. Knuth. The Art of Computer Programming, second edition, vol. 2, Seminumerical Algorithms. Addison-Wesley Publishing Company, 1981.
[Michaelson, 1989] Greg Michaelson. An Introduction to Functional Programming through Lambda Calculus. Addison-Wesley Publishing Company, 1989.
[Peyton Jones, 1987] Simon L. Peyton Jones. The Implementation of Functional Programming Languages. Prentice-Hall International, 1987.
[Queinnec, 1990] Christian Queinnec. A Subjective View of Lisp. LISP Pointers, Volume 3, Number 1. ACM, NY, July 1989-March 1990.
[Ramsdell, 1986] John D. Ramsdell. The Curry Chip. In Proceedings of the 1986 ACM Conference on LISP and Functional Programming. Cambridge, MA, August 4-6, 1986.
[Rees et al., 1986] Jonathan Rees and William Clinger (editors). Revised3 Report on the Algorithmic Language Scheme; AI Memo 848a. MIT Artificial Intelligence Laboratory, Cambridge, Massachusetts, USA, September 1986.
[Révész, 1988] György E. Révész. Lambda-Calculus, Combinators, and Functional Programming. Cambridge University Press, Cambridge, England, 1988.
. . .
The Scheme combinators presented herein were derived from λ-calculus versions. Except for Y and others where noted, the λ-calculus versions are from [Révész, 1988].
MacScheme is put out by Lightship Software, P.O. Box 1636, Beaverton, OR 97075 USA. Phone: (503) 643-6909.
. . .
André can be reached on the Internet:
; File: combinators.sch. (Eval first.)

; Projection functions.
(define identity ; project-1st-of-1
  (lambda (x) x))
(define project-1st-of-2
  (lambda (x) (lambda (y) x)))
(define project-2nd-of-2
  (lambda (x) (lambda (y) y)))
(define project-3rd-of-3
  (lambda (x) (lambda (y) (lambda (z) z))))
(define 3-consumer project-3rd-of-3)

; Unraveling markers: functionally just the identity;
; file stuff.sch redefines them to gather statistics.
(define unravel identity)
(define partial-unravel identity)

; Booleans and conditionals.
(define com-true project-1st-of-2)
(define com-false project-2nd-of-2)
(define force-a-thunk ; used by com-if and others.
  (lambda (thunk) (thunk)))
(define com-if
  (lambda (condition)
    (lambda (then)
      (lambda (else)
        (force-a-thunk ((condition then) else))))))
(define com-not ; [Michaelson, 1989]
  (lambda (x)
    (((com-if x) (lambda () com-false))
     (lambda () com-true))))
(define com-and ; [Field, 1989]
  (lambda (x) (lambda (y) ((x y) com-false))))
(define com-or ; [Field, 1989]
  (lambda (x) (lambda (y) ((x com-true) y))))

; List primitives.
(define com-cons
  (lambda (x)
    (lambda (y)
      (lambda (selector) ((selector x) y)))))
(define com-car (lambda (tuple) (tuple com-true)))
(define com-cdr (lambda (tuple) (tuple com-false)))
(define com-nil ; project-2nd-of-3 [Field, 1989]
  (lambda (x) com-true))
(define com-null? ; [Field, 1989]
  (lambda (tuple)
    (tuple (lambda (head) (lambda (tail) com-false)))))

; Y combinator.
(define applicative-order-y
  (lambda (f)
    ((lambda (x) (f (lambda (arg) ((x x) arg))))
     (lambda (x) (f (lambda (arg) ((x x) arg)))))))

; The Mother of All Church numerals.
(define com-zero project-2nd-of-2)

; Church numeral predicates.
(define com-zero?
  (lambda (n) (((unravel n) 3-consumer) com-true)))
(define com-even? ; [Révész, 1988] (not in book)
  (lambda (n) (((unravel n) com-not) com-true)))
(define com-odd? ; [Révész, 1988] (not in book)
  (lambda (n) (((unravel n) com-not) com-false)))
(define com-<? ; [vanMeule]
  (applicative-order-y
   (lambda (less-than?)
     (lambda (x)
       (lambda (y)
         (((com-if (com-zero? x))
           (lambda ()
             (((com-if (com-zero? y))
               (lambda () com-false))
              (lambda () com-true))))
          (lambda ()
            (((com-if (com-zero? y))
              (lambda () com-false))
             (lambda () ((less-than? (com-pred x))
                         (com-pred y)))))))))))

; Church numeral operators.
(define com-succ
  (lambda (n)
    (lambda (f)
      (lambda (x) (f (((unravel n) f) x))))))
(define make-ascending-tuple ; part of pred
  (lambda (tuple)
    ((com-cons (com-cdr tuple))
     (com-succ (com-cdr tuple)))))
(define initial-pred-tuple ; part of pred
  ((com-cons "com-pred called on 0") com-zero))
(define com-pred
  (lambda (n)
    (com-car (((unravel n) make-ascending-tuple)
              initial-pred-tuple))))
(define com-add ; Révész version [Révész, 1988]
  (lambda (m)
    (lambda (n)
      (lambda (f)
        (lambda (x) ((m f) ((n f) x)))))))
(define com-add ; [vanMeule] (shadows the version above)
  (lambda (m)
    (lambda (n)
      (lambda (f)
        (lambda (x)
          (((unravel (((unravel n) com-succ) m)) f) x))))))
(define com-sub ; [vanMeule]
  (lambda (m)
    (lambda (n)
      (lambda (f)
        (lambda (x)
          (((unravel (((unravel n) com-pred) m)) f) x))))))
(define com-mul ; [vanMeule]
  (lambda (m)
    (lambda (n)
      (lambda (f)
        ((partial-unravel m) ((partial-unravel n) f))))))
(define com-quo ; [vanMeule]
  (applicative-order-y
   (lambda (the-quo)
     (lambda (dividend)
       (lambda (divisor)
         (((com-if ((com-<? dividend) divisor))
           (lambda () com-zero))
          (lambda ()
            (com-succ ((the-quo ((com-sub dividend)
                                 divisor))
                       divisor)))))))))
; com-rem: the reader defines remainder.
(define com-pow ; [Katz, 1988]
  (lambda (m)
    (lambda (n)
      ((partial-unravel n) m))))

; Church numeral utility functions.
(define number->church ; make-church-numeral
  (lambda (n)
    (if (zero? n)
        com-zero
        (com-succ (number->church (- n 1))))))
(define church->number ; dechurchify-numeral
  (lambda (church-numeral)
    (((unravel church-numeral) 1+) 0))) ; 1+ is MacScheme's successor