Artificial life is the term applied collectively to attempts to develop mathematical models and computer simulations of the ways in which living organisms develop, grow, and evolve. Researchers in this maturing field hope to gain deeper insights into the nature of organic life, as well as into the further possibilities of computer science and robotics.
Artificial-life techniques are also being used to fathom the origins and chemical processes of metabolism. Some investigators have even proposed that some digital “life” in computers might already be considered a real life form. The term artificial life was coined in the 1980s by Christopher Langton, a computer scientist at Los Alamos National Laboratory and the Santa Fe Institute.
Langton organized the first workshop on the subject, held at Los Alamos in 1987. Since then other a-life conferences have taken place, drawing increasingly wide attention and a growing number of participants. Theoretical studies of a-life, however, had been in progress long before the 1980s. Most notably, the Hungarian-born U.S. mathematician John von Neumann explored the nature of very basic a-life forms called cellular automata in the 1950s. Cellular automata are imaginary mathematical “cells” (analogous to checkerboard squares) that can be made to simulate physical processes by subjecting them to certain simple rules called algorithms. Before his death, von Neumann had developed a set of algorithms by which a cellular automaton (a box shape with a very long tail) could “reproduce” itself.
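Von Neumann’s self-reproducing automaton required cells with 29 possible states arranged on a two-dimensional grid, far too elaborate to reproduce here. The short Python sketch below shows only the underlying mechanism, using a one-dimensional automaton with two states per cell; the particular update rule (Wolfram’s Rule 110) is an illustrative choice of mine, not anything from von Neumann’s work.

    # Each cell is 0 or 1. Every generation, each cell's new state is a
    # fixed function (the "rule") of its own state and its two neighbours'.
    RULE = 110  # the rule's eight outcomes, packed into the bits of one number

    def step(cells):
        n = len(cells)  # wrap around at the edges so every cell has two neighbours
        return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    row = [0] * 31
    row[15] = 1  # start from a single live cell in the middle
    for _ in range(15):
        print("".join(".#"[c] for c in row))
        row = step(row)

Even a rule this simple generates intricate, lifelike patterns, which is precisely what made cellular automata attractive as a minimal model of living processes.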
Another significant contribution to the field of artificial life was that of the Dutch biologist Aristid Lindenmayer. Interested in the mathematics of plant growth, Lindenmayer found in the 1960s that through the use of a few basic algorithms (now called Lindenmayer systems, or L-systems) he could model biochemical processes as well as trace the development of complex biological forms such as flowers. Computer-graphics programs now make use of L-systems to yield realistic three-dimensional images of plants. The significance of Lindenmayer’s contribution is evident in the fact that the related “genetic algorithms” are now basic to research into a-life as well as to many other areas of interest.
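The flavor of an L-system fits in a few lines of Python. The sketch below implements Lindenmayer’s own original model of algae growth; graphics programs obtain plant shapes from such systems by interpreting the symbols of the final string as drawing commands.

    # Lindenmayer's algae model: every symbol is rewritten in parallel
    # each generation. A is a mature cell, B a young one.
    rules = {"A": "AB",  # a mature cell divides into itself plus a young cell
             "B": "A"}   # a young cell matures

    def generation(s):
        return "".join(rules.get(ch, ch) for ch in s)

    s = "A"  # the axiom, i.e., the starting string
    for i in range(7):
        print(i, s)
        s = generation(s)

The lengths of the successive strings grow as the Fibonacci numbers, a small demonstration of how a couple of rewriting rules can produce structured, organic growth.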
Genetic algorithms, first described by the computer scientist John Holland of the University of Michigan in the 1970s, are comparable to L-systems. A researcher trying to answer some question about a-life sets up a system (an algorithm) by which the computer itself rapidly grades the many possible answers it has produced to the question. The most successful of the solutions are then used to develop new software that yields further solutions, and the cycle is repeated through several “generations” of answers.
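In outline, that grade-select-breed cycle looks like the following Python sketch. The toy problem (maximizing the number of 1-bits in a string) and every parameter value here are illustrative assumptions of mine, not details from Holland’s work.

    import random

    def fitness(bits):           # "grade" a candidate answer
        return sum(bits)

    def mutate(bits, rate=0.02):
        return [b ^ (random.random() < rate) for b in bits]

    # an initial population of random candidate answers
    population = [[random.randint(0, 1) for _ in range(40)] for _ in range(30)]

    for gen in range(50):        # several "generations" of answers
        population.sort(key=fitness, reverse=True)
        parents = population[:10]            # keep the most successful solutions
        children = []
        while len(children) < len(population):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))    # crossover point
            children.append(mutate(a[:cut] + b[cut:]))
        population = children

    print("best fitness found:", max(map(fitness, population)))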
Langton himself picked up on the work of von Neumann by attempting to design an a-life form on a computer screen. In 1979 he finally succeeded in developing loop-shaped objects that reproduced themselves over and over again.
As new generations spread outward from the initial “organisms,” they left “dead” generations inside the expanding perimeter. Langton noted that the “behavior” of these a-life forms genuinely mimicked real-life processes of mutation and evolution. He eventually proposed that a-life studies could provide keys to understanding the logical form of any living system, known or unknown. One of the most striking a-life simulations of evolutionary processes has been the work of Thomas Ray of the University of Delaware, who in 1990 set in motion a “world” of computer programs that he called Tierra. The world started out with a single ancestor, a program containing 80 instructions.
Evolution in Tierra progressed as mutations rapidly appeared. The new forms included “parasites” that interacted with the original host forms, producing further mutations of hosts and parasites that “learned” to deal with one another anew in each succeeding generation.
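Tierra itself runs self-replicating machine-language programs inside a purpose-built virtual computer, with mutation introduced as occasional copying errors; reproducing that machinery is beyond a short example. The Python sketch below is a drastic simplification of my own, keeping only the flavor: organisms compete for space in a fixed-size “soup,” copying is error-prone, and shorter genomes finish replicating sooner.

    import random

    SOUP_CAPACITY = 200
    soup = [80] * 10   # organisms reduced to genome length; the ancestor had 80 instructions

    for _ in range(20000):
        parent = random.choice(soup)
        # a shorter genome takes fewer cycles to copy, so it replicates more often
        if random.random() < 10.0 / parent:
            child = parent
            if random.random() < 0.1:              # a copying error mutates the genome
                child = max(10, parent + random.choice([-5, 5]))
            soup.append(child)
            if len(soup) > SOUP_CAPACITY:
                soup.pop(0)                        # the "reaper" removes the oldest organism

    print("mean genome length:", round(sum(soup) / len(soup), 1))

In the real Tierra the analogous pressure produced Ray’s parasites: programs only 45 instructions long that had lost their own copying routine and reproduced by borrowing a host’s.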