December 92 - THE VETERAN NEOPHYTE
DIGITAL ZOOLOGY
DAVE JOHNSON
4 A.M. Friday still feels like Thursday. Five hours remain until the contest. Bean dip slowly dries
around the rim of a jar, turning a darker, almost translucent brown. This corner of the table, the one
nearest the center of the room, is littered with the strange and particular combination of plastic,
paper, metal, glass, and organic debris that typifies the remains of junk food. The room, a large but
nondescript meeting room with beige-painted cinder block walls, is bathed in fluorescent light, 60-
cycle radiation painting the few remaining occupants a lovely whitish green.
A few of them still hunch over keyboards, pecking feverishly, squeezing the last few desperate
instructions into their robots. Others sprawl on the floor around the test course, watching carefully
and hopefully as their fragile creations, their little Lego and wire and motor golems, their tiny mind
children, haltingly -- but autonomously -- negotiate their way toward the goal. The expressions on
their faces are variously rapt, worried, and proud.
The scene is the early morning of the last day of Artificial Life III, a week-long scientific hoe-down
that took place last June in Santa Fe. The hardy hackers in the cluttered room at the back of the
building are entrants in a robot-building contest that will be run as part of the "Artificial 4H Show"
beginning at 9 A.M. Their robot creatures run the gamut from the eminently practical to the
practically insane.
The insane ones, of course, are by far the more interesting. One, appropriately named Rob Quixote,
has only a single wheel, and therefore must steer by rotating an oversized horizontal windmill-like
contraption fastened to its head, effectively pushing against the air to turn itself. Another moves by a
sort of spastic lurching; throwing its entire front section forward, it gains an awkward quarter inch,
then gathers up its hindquarters for another throw. This one is so inefficient that it requires twice the
usual number of batteries, and uses them up in a single run. Amazingly enough, though, it
successfully traverses the course, albeit slowly and with much ineffectual thrashing.
"Artificial life," as a named discipline, appeared on the scientific scene relatively recently. The first
conference happened in the fall of 1987, and gave joyous birth to this new field of scientific inquiry,
or rather this new and rich confluence of many different fields. Scientists who had been working in
isolation suddenly discovered others pursuing similar lines of investigation, and the meeting of minds
was electric.
Artificial life is an attempt to create and study artificial systems -- that is, systems created by humans
-- that mimic processes or exhibit behaviors usually associated only with living systems. Predictably,
the primary medium that these systems are created on (in?) is computers; this is a field that depends
heavily on technology to get its work done (they're doomed if electricity ever becomes unavailable).
Also predictably, a large proportion of its devotees are biologists, especially theoretical biologists.
Why would biologists want to study artificial life? Don't they already have their hands full trying to
figure out the real thing? Well, for one thing, there are a lot of experiments biologists would love to
do that they simply can't: nature doesn't come with convenient levers and knobs, and you can never
roll back time and try something over again. So if biologists can develop good models of biological
phenomena, they can implement them on computers and run clean and tidy experiments that are
repeatable, detailed, controlled, and manipulable down to the last detail. This is a far cry from the
messy, inexact, unrepeatable real world, and for some biologists would be tantamount to scientific
nirvana.
But there's another, larger reason for biologists to study artificial life. In the words of Chris Langton,
self-described "midwife" of artificial life (he organized the first conferences and named the field),
"Such systems can help us expand our understanding of life as itcould be. By allowing us to view the
life that has evolved here on Earth in the larger context ofpossible life, we may begin to derive a trulygeneral theoretical biology capable of making universal statements about life wherever it may be
found and whatever it may be
made of."
I like it.
When I read this I was hooked. Visions of bizarre, unknowable alien intelligences and strange,
seething soups that cling and quiver and creep around filled my head. And here are real scientists
hanging around seriously discussing it! This is some serious fun! And lots of different kinds of
scientists are paying attention: biologists, mathematicians, physicists, chemists, roboticists, and
computerists are all well represented at the conferences, with a sprinkling of philosophers,
anthropologists, economists, and others. The gee-whiz factor hooked me, but the interdisciplinary
thrust of artificial life reeled me in.
(In conversation people say "a-life." I've seen it written as Alife, A-life, alife, and a-life. I wanted to
use alife, but people tended to pronounce it like "get a life," so I'll use a-life instead.)
Another appeal for me is the tacit approval of the "build it first, then study it" approach in a-life.
This method of building things and learning things (stumbling around, really, but intelligent stumbling, directed stumbling) has always been my particular forte. The premise is that we don't need
to completely understand something before we can build it or build a model of it, and that it's very
often more instructive to get a crude version up and working immediately than to try to refine the
thing completely before trying it out. By fumbling around and building things blindly, we can often
learn a lot by virtue of the happy
accidents that inevitably occur. And it's tons more fun that way.
There were far too many interesting things at the conference to describe them all here. Instead I
want to tell you about one particular talk that caused me to have a powerful "Aha!" experience (and I live for "Aha!" experiences). If you know something about evolution already, the following may not
be news to you, but presumably most computer programmers don't study biology.
The talk dealt with Lamarckian evolution. Lamarck was a naturalist who preceded Darwin; he postulated
that the things experienced by an organism during its lifetime could affect the traits handed down to
the next generation. As an example, a Lamarckian might believe that proto-giraffes had to stretch
their necks up to reach the leaves at the tops of the trees, and because of all the stretching, their
descendants were born with longer necks. Unfortunately for Lamarck and his followers, this is
rubbish.
It turns out that as far as biological evolution is concerned, Lamarckism is nonexistent: there was no
such thing at work in the development of life on Earth. So my curiosity was piqued when I saw the
title of this talk by David Ackley and Michael Littman: "A Case for Distributed Lamarckian
Evolution." What, were they crazy? Talking Lamarck to all these modern scientists? (At the previous
conference, Ackley had one of the few really amusing presentations, so of course I would have gone
no matter what the topic, but this one looked particularly juicy.)
Ackley and Littman weren't trying to convince people that Lamarckian evolution had anything to do
with life on earth. What they did instead was compare the efficiencies of the two types of evolution.
(They created a simple evolution simulation, and then compared Darwinian and Lamarckian
evolution in their abilities to find a solution to a particular problem.) Hey, this is after all artificial life, so if Lamarckian evolution works better, we can use it, right?
What they found was that when Lamarckian evolution was allowed to enter the picture -- when the
things learned in one generation were at least partially passed on to the next -- the system was much,
much better at solving the given problem. It consistently found better solutions faster in every single
case they tried. This of course makes some intuitive sense. Rather than waiting for genetic shuffling
to find a solution to the problem, the prior generation can point the current one in the right
direction. So Lamarckian evolution is pretty much a great thing, evolutionarily speaking, because it
gets you a lot further and it gets you there a lot faster. (Where it is exactly that you're going is a
question for the philosophers; for the moment, let's just blithely assume that we really do want to get there.) Their point was that as simulation builders we should think about using Lamarckian
inheritance in our simulations, because it works so well. But this point reinforced something else that
had been rolling around in my head.
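If you want to play with the idea yourself, the gist of their comparison is easy to capture in a few lines of code. What follows is my own toy sketch, not Ackley and Littman's actual model: both populations do a little hill-climbing "learning" during their lifetimes and are selected on what they've learned, but only the Lamarckian population writes the learned bits back into the genome it hands to its children. Every name and number here (TARGET_LEN, LEARNING_STEPS, and so on) is an arbitrary toy value of my own invention.

import random

TARGET_LEN = 40          # toy problem: evolve a bit string of all 1s
POP_SIZE = 30
GENERATIONS = 60
LEARNING_STEPS = 20      # "lifetime learning": random bit-flip hill climbing
MUTATION_RATE = 0.02

def fitness(genome):
    return sum(genome)                        # number of 1 bits; TARGET_LEN is perfect

def learn(genome):
    # Lifetime learning: try random bit flips, keep only the ones that help.
    learned = genome[:]
    for _ in range(LEARNING_STEPS):
        trial = learned[:]
        trial[random.randrange(TARGET_LEN)] ^= 1
        if fitness(trial) > fitness(learned):
            learned = trial
    return learned

def evolve(lamarckian):
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Everyone learns during its lifetime; selection looks at the learned fitness.
        pairs = [(learn(g), g) for g in pop]
        pairs.sort(key=lambda p: fitness(p[0]), reverse=True)
        # Lamarckian inheritance: the learned genome is what gets passed on.
        # Darwinian inheritance: only the original, unlearned genome is heritable.
        parents = [(l if lamarckian else g) for l, g in pairs[:POP_SIZE // 2]]
        pop = []
        for i in range(POP_SIZE):
            child = parents[i % len(parents)][:]          # copy a parent
            for j in range(TARGET_LEN):
                if random.random() < MUTATION_RATE:       # a little mutation
                    child[j] ^= 1
            pop.append(child)
    best = max(pop, key=fitness)
    return fitness(best), fitness(learn(best))

random.seed(1)
print("Darwinian  (genome, after learning):", evolve(lamarckian=False))
print("Lamarckian (genome, after learning):", evolve(lamarckian=True))

Run it a few times and the Lamarckian population should typically end up much closer to a perfect string of 1s. That's the flavor of Ackley and Littman's result, though their simulation was of course far richer than this toy.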
There's an evolutionary premise that I initially learned about through reading an article by a
roboticist named Hans Moravec in the first Artificial Life proceedings. I learned more about it in
Richard Dawkins's book The Blind Watchmaker and in a fascinating book called Seven Clues to the
Origin of Life by a Glasgow chemist named Graham Cairns-Smith. This particular concept is called
"genetic takeover."
According to this idea, one substance can gradually replace another as the carrier of genetic
information. Cairns-Smith postulates that life began with replicating inorganic crystals -- clays, as a
matter of fact -- and that a genetic takeover gradually occurred, with proteins and nucleic acids
gaining in dominance until finally the original materials were no longer needed. Dawkins and
Moravec (and many others) think that a genetic takeover is occurring now, with human culture
taking over from nucleic acids as the evolving entity, though they differ in their candidates for the
new "gene-equivalent."
Dawkins likes to speak about the "meme," a very useful term first coined in his book The Selfish Gene.
A meme is an idea, really, or a piece of information. It is immaterial, and requires a material substrate
of brains, books, computers, or other media to exist. But given that substrate, the parallels with genes
are very good. Just like genes, memes replicate (we tell each other good ideas, or write them down
for others), memes mutate (we don't always get it right in the telling), memes mate (ideas in
combination often give birth to new ones), and memes compete for survival ("good" ideas stick
around a long, long time, but "bad" ones die by not being passed on to anyone: mindshare is their
means of existence).
Moravec, on the other hand, seems to be more interested in the evolution of machines, and
speculates convincingly and entertainingly that our machines, our artifacts, will eventually become
the dominant evolving entities on Earth. Science fiction, or science fact? I don't know -- there are
compelling arguments both ways -- but in either case it makes for very good reading.
In any case, they think that perhaps here on Earth biological evolution is thoroughly obsolete, and
almost despite myself I have to agree. Sure, it's still operating, but the evolution of human bodies has
been completely outstripped by the evolution of human culture. Bodies evolve at an extremely slow
pace, but culture evolves incredibly fast, and humans are having such a profound impact on the Earth
that biology simply can't keep up. Look at the changes on Earth in the last millennium. Most of the
species alive a thousand years ago have remained physically about the same, yet there's no question
that the Earth has undergone a radical transformation, and primarily at the hands of humans, as a by-
product of their culture. (You might hesitate to call the rampant, wanton destruction and boundless
consumption of resources that Earth has suffered at the hands of humans "evolution," but remember
that the word "evolution" doesnot necessarily imply improvement.) But why is it going so fast? How
come humans do this and other species don't?
One of the primary distinctions between human beings and their close animal relatives is language.
Humans can communicate with abstract symbols, and their communications can be "fossilized" in
time (that is, written down for later). Here comes the "Aha!" we've all been waiting for: this ability
allows humans to engage in a form of Lamarckian evolution! The things we learn in our lifetimes can
be passed on to the next generation, though in a filtered sort of way. We can't change the way our
offspring are built, but we can change their behavior (teenagers notwithstanding). Other species do
this to some extent, but humans are the unquestioned champs at shaping their offspring.
As you can see, a-life -- just like life itself -- is rife with philosophical conundrums and radical,
thought-provoking concepts, and that's much of the reason I stay interested. But probably the biggest
reason of all that I like a-life is hard to express, except by analogy: I get the same feeling peering
through a glass screen into a computer world full of digital critters that I do peering through the bars
of a cage at the zoo. The xenophile in me wants to see all the forms that life can take, and get to
know the minds of every other being. I want to puzzle out the motivations behind a critter's
behavior, and I love that shock of recognition I experience every time I look into an animal's eyes --
even the ones that are so alien, like birds and reptiles and fish. Again, it's this feeling that there are
universal properties of life waiting to be discovered, properties that apply not only to life as it has
evolved on Earth but to all possible life, including the digital variety.
Are any of these a-life explorations really alive? That's an energetic and ongoing debate among a-
lifers, of course, and the answer ultimately depends on the definition you pick for the word "life."
Rather than arguing whether metabolism is more necessary to life than reproduction, though, I like
to duck the definition issue. I don't really care too much whether we call them alive; I want to see if
people react to them as if they're alive. I want to see that shock of recognition occur when people and
digital organisms collide. (What if "they" recognize "us"?!) It's sort of the Turing Test approach for
life: if it seems alive -- if people can't tell that it's not alive -- then no matter what we call it, people
will treat it as if it's alive. That I'd like to see.
RECOMMENDED READING
- Artificial Life by Steven Levy (Pantheon Books, 1992).
- The Blind Watchmaker by Richard Dawkins (W. W. Norton & Company, 1987).
- The Selfish Gene by Richard Dawkins (Oxford University Press, 1976).
- Seven Clues to the Origin of Life by A. G. Cairns-Smith (Cambridge University Press, 1985).
- ZOTZ! by Walter Karig (Rinehart & Company, Inc., 1947).
DAVE JOHNSON's mother recently moved across the country, and sent him a total of eight large cardboard boxes
crammed with junk spanning his entire life that she didn't want cluttering her garage any more. Among his old school stuff
was a report card from second grade that included a couple of N's, meaning "needs improvement." The N's were in the
categories of "Is Prompt" and "Works Steadily." Here's a quote from his teacher, Mrs. Doris Short, that accompanied the
report: "We've talked about being prompt, but it's always 'I'll finish tomorrow.'" This is strong evidence for the claim that
personality is established early in life, and never changes. *
Dave welcomes feedback on his musings. He can be reached at JOHNSON.DK on AppleLink, dkj@apple.com on the
Internet, or 75300,715 on CompuServe.*