"Life is an information process" (John von Neumann).
"Life happens on a virtual checkerboard. The
squares are called cells" (John Conway).
"We want to create life in the computer and not
in test tubes" (Christopher Langton).
"Life is an evolving software" (G. Chaitin).
Artificial Life
Artificial life −abbreviated AL or ALIFE− is a relatively new field that attempts to simulate biological processes by computational means. AL is the study of life in a digital environment.
According to Christopher Langton −the first to use the term "artificial life"− AL is "the field of study devoted to understanding life, attempting to abstract the fundamental dynamic principles underlying biological phenomena, and recreating those dynamics in other physical media such as computers by making them accessible to new kinds of experimental manipulation and testing."
AL was born out of the need for a theoretical, computational kind of biology based on universal laws. Its objectives are:
To determine the characteristics that define living processes, independently of the substrate, of the materiality in which they take place. It includes the question of whether life is dependent on any particular medium or matter.
To investigate the mechanisms that make "the living" emerge as a phenomenon not reducible to its components.
To simulate by computer the autonomous behavior of living beings. An artificial living being encapsulates a paradigm: a worldview, senses, beliefs and goals.
Exploring alternative forms of life. According to Langton, "AL is about placing life 'as it is' within the context of life 'as it could be.'"
Managing the time of virtual organisms. One difference between a living organism and a virtual one is that virtual time can be manipulated: it can be sped up or slowed down.
To practice synthetic biology from components, by analogy with synthetic chemistry.
Cellular automata
AL was born with cellular automata (CAs). A CA is a type of finite automaton consisting of:
A two-dimensional grid of square cells (like a checkerboard or chessboard) representing the "biological space", where a black square represents a living cell and a white square represents a dead cell or lifeless space. This is generalizable to more than two states.
A set of simple rules that state how the system will evolve, i.e., whether the cells will remain in the same state or change state.
The system evolves in a discrete, discontinuous manner: all the cells of each generation are computed at the same time, synchronously.
The fact that simple rules can produce extraordinarily complex behavior makes CAs a possible theoretical foundation for life and, by extension, for intelligence. Indeed, CAs make it possible to simulate the functions of: interaction with the environment, evolution, as well as coexistence and competitiveness of various life forms sharing the same environment.
CAs were devised by Stanislaw Ulam, but it was John von Neumann who pushed them forward as a computational model by using them to implement a theoretical machine capable of self-replication. Von Neumann is considered "the father" of AL.
Von Neumann called self-reproducing machines "Universal Assemblers". One of the concepts he defined was the "universal constructor": a device capable of constructing another identical machine from its own structural and constructive description. Cells possess an analogous mechanism, since they contain not only information about their structure but also about how to construct other cells of the same type. At first, von Neumann's self-reproducing machine was considered an impossible task, on the grounds that such a machine would have to contain a description of itself, that this description would in turn need a description, and so on ad infinitum.
In lectures he gave at Princeton in 1948, von Neumann said that a self-reproducing machine would have to have at least 8 different kinds of parts: 4 for the brain and 4 for the motor part. His machine came to have 29 states per cell (states identified according to their functionality) and a single transition rule, but it was never finalized due to his untimely death. His design was later completed and published posthumously by Arthur W. Burks.
Von Neumann proved that a machine can create another machine more complicated than itself. There can be generations of machines, each more complex than its predecessors. This is the same thing that happens in biological evolution. Von Neumann showed that even the complexity of living organisms can be reduced or compacted into a relatively simple set of recursive rules. Self-reproduction can be understood as the consequence of a simplified "physics" based on a cellular automaton. According to von Neumann, biological reproduction is mechanistic and a definition of life can be formulated in terms of information.
According to Ed Fredkin: The universe is really a gigantic computer, specifically a gigantic CA. CAs are, in essence, worlds not unlike our own. There is an information process that underlies everything. "If something cannot be done in a computer, then we cannot do it by a purely mechanical process." And the living organisms that inhabit this universe operate on the same principles as computational processes.
The game of life
The "game of life" was invented by John Conway in 1970 when he was trying to drastically simplify the rules and states of von Neumann's self-reproducing machine. It is a two-dimensional cellular automaton that represents the paradigm of dynamic complexity from the simple: three very simple rules applied successively (to an initial configuration of black cells) can produce extremely complex dynamic results. The rules are as follows:
If a black cell has 2 or 3 black neighbors, it remains black (survives).
If a white cell has 3 black neighbors, it becomes black (revives).
In all other cases, the cell remains white (if white) or becomes white (if black).
The rules can be summarized as follows (the original table illustrated each rule with before-and-after board configurations):
1. A living cell with 2 or 3 living neighboring cells survives.
2. A dead cell with 3 living neighboring cells revives.
3a. A living cell with fewer than 2 living neighboring cells dies (isolation).
3b. A living cell with more than 3 living neighboring cells dies (overpopulation).
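These rules can be made concrete with a short sketch (an illustration added here, not part of the original text). It represents the unbounded board as a set of live-cell coordinates and applies the three rules synchronously:

```python
# Minimal Game of Life step, assuming an unbounded board represented
# as a set of (x, y) coordinates of live (black) cells.

def neighbors(cell):
    """The 8 cells surrounding a given cell."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """Apply the three rules synchronously to a set of live cells."""
    # Only live cells and their neighbors can change state.
    candidates = live | {n for c in live for n in neighbors(c)}
    new_live = set()
    for cell in candidates:
        n = len(neighbors(cell) & live)
        if cell in live and n in (2, 3):    # rule 1: survival
            new_live.add(cell)
        elif cell not in live and n == 3:   # rule 2: birth
            new_live.add(cell)
        # rule 3: every other cell dies or stays dead
    return new_live

# A "blinker": three cells in a row return to their initial shape
# every two generations (a period-2 oscillator).
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```

The synchronous update is essential: all cells of a generation are computed from the previous generation before any change is applied.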
Features:
The game is played on a virtually infinite board.
It is a single-player game in which the player intervenes only at the beginning, when the initial configuration is established. From that moment on, the game runs automatically, since its evolution is fully determined by the initial configuration.
Black cells represent living cells and white cells represent dead cells. Each cell has 8 neighboring cells.
The initial form evolves in discrete time steps, with all cells updated simultaneously.
Global emergent and self-organizing properties induced by local dynamics appear.
Initial forms evolve in surprising ways. Most simple forms settle into stable configurations. Others evolve into periodic, oscillating configurations. Some simple initial forms have very complex lifetimes, the most representative example being the "R-pentomino": a cluster of five contiguous cells reminiscent of the letter R, which only stabilizes after 1103 generations.
R-pentomino
The game of life can emulate a universal Turing machine: everything that a Turing machine can compute can be computed by the game of life. It can therefore emulate any computational model, and an abstract computer can be built out of game-of-life patterns.
The game of life is the best-known example of a cellular automaton and the paradigmatic application of computational biological processes. It contributed decisively to spreading the concept of the CA, becoming a source of inspiration for researchers. Because of the simplicity of its rules and the enormous number of dynamic forms it can generate, the game aroused great interest and became popular almost instantly, even becoming a cult object during the 1970s and later. It was easy to experiment with on a personal computer: one could watch on screen how an artificial "living being" evolved. It was also a precedent of fractals, since some of its shapes are reminiscent of those recursive geometric forms.
Here is an example of an oscillating configuration of period 3:
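One well-known period-3 oscillator is the "pulsar". As a hedged illustration (added here; the original figure is not reproduced), the following sketch builds the pulsar's most common phase and checks that it returns to its initial configuration after exactly three generations:

```python
from collections import Counter

# One synchronous Game of Life generation on a set of live cells,
# implemented by counting how many live cells touch each position.
def next_gen(live):
    counts = Counter((x + dx, y + dy) for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The pulsar's standard 48-cell phase: four horizontal trios of cells
# and four vertical trios arranged symmetrically in a 13x13 box.
pulsar = ({(x, y) for y in (0, 5, 7, 12) for x in (2, 3, 4, 8, 9, 10)}
          | {(x, y) for x in (0, 5, 7, 12) for y in (2, 3, 4, 8, 9, 10)})

assert next_gen(pulsar) != pulsar                      # it really oscillates
assert next_gen(next_gen(next_gen(pulsar))) == pulsar  # period 3
```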
Christopher Langton was captivated by Conway's Game of Life. From the moment he encountered it, he pursued the idea that it was possible to simulate living creatures on a computer. After years of study, he managed to simplify von Neumann's CA from 29 states to just 8, and in 1979 he achieved the first self-reproducing computational organism using a personal computer.
Wolfram's vision
Wolfram showed in his book "A New Kind of Science" (2002) that CAs allow many complex shapes to be generated from very simple deterministic rules applied recursively, and that CAs make it possible to create a new kind of science for modeling physical phenomena: a simpler, more direct and visual alternative to cryptic differential equations. Wolfram explored and categorized the types of complexity that CAs produce and showed how they could model forms of nature such as seashells and plant growth. Wolfram saw the computer as an ideal abstract territory for experimentation, and CAs as the simplest and most powerful system.
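The CAs Wolfram catalogued most thoroughly are the "elementary" one-dimensional ones: each cell's next state depends only on itself and its two neighbors, so a rule is just an 8-entry lookup table, numbered 0 to 255. A brief sketch (an illustration added here; the zero boundary is an assumption):

```python
# Elementary cellular automaton: the rule number's binary digits give
# the next state for each of the 8 possible (left, center, right)
# neighborhoods. Cells outside the row are treated as dead (0).

def eca_step(cells, rule):
    """Apply an elementary CA rule to a tuple of 0/1 cells."""
    padded = (0,) + cells + (0,)
    return tuple(
        (rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1))

# Rule 30, one of Wolfram's chaotic rules, grown from a single live cell:
row = (0,) * 15 + (1,) + (0,) * 15
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = eca_step(row, 30)
```

Despite the triviality of the rule table, rule 30 produces an aperiodic, seemingly random triangle of cells, which is exactly the simple-rules-to-complexity phenomenon Wolfram emphasized.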
The bioforms of Richard Dawkins
Bioforms (biomorphs) are digital creatures produced by computer by Richard Dawkins, who describes them in chapter 3 of his 1986 book "The Blind Watchmaker", a text popularizing neo-Darwinism, of which he is an advocate. Dawkins wrote the program (which he called "Evolution") to illustrate how complex, meaningful bioforms can be created without a designer. Bioforms constitute a bridge between neo-Darwinism and AL.
A bioform is a 2D, tree-like graph, generated by a recursive algorithm, that reproduces and mutates like a living thing. Initially, the graph is very simple, but after a number of steps (recursions) it can become a very complex graph. The initial parent bioform is a vertical line segment. From this segment other segments branch off, and from these other segments, and so on until all the intended recursions are completed.
A bioform has 9 genes (numbered from 1 to 9). Each gene represents an aspect or parameter of the bioform and can take an integer value between −9 and +9. Dawkins did not elaborate on the parameters, but gave some clues, such as the number of branches, the length of each branch and the branching angle. The last gene (the 9th) is the depth of recursion. The genes of the bioform determine its external appearance (the phenotype) and its evolution, analogously to biological genes. Since there are 9 genes and 19 possible values for each gene, the number of different possible bioforms is 19⁹ (about 3.2 × 10¹¹). In Appendix I of his book, Dawkins presented an extension of the algorithm with 16 genes instead of 9.
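Since Dawkins did not publish the exact gene-to-parameter mapping, the following sketch is only a hypothetical reconstruction of the idea: a genome of 9 integers drives a recursive branching tree, with the last gene fixing the recursion depth. The particular formulas for length and angle are invented for illustration.

```python
import math

# Hypothetical biomorph: a recursive binary tree whose shape is
# controlled by a 9-gene genome. The gene-to-parameter mapping below
# is an assumption, not Dawkins' actual "Evolution" program.

def biomorph_segments(genes, x=0.0, y=0.0, angle=90.0, depth=None):
    """Return the bioform's line segments as (x1, y1, x2, y2) tuples."""
    if depth is None:
        depth = abs(genes[8]) + 1             # gene 9: recursion depth
    if depth == 0:
        return []
    length = (abs(genes[0]) + 1) * depth      # gene 1: branch length
    spread = 10 * genes[1] + 45               # gene 2: branching angle
    x2 = x + length * math.cos(math.radians(angle))
    y2 = y + length * math.sin(math.radians(angle))
    segments = [(x, y, x2, y2)]
    for turn in (-spread, spread):            # two child branches per node
        segments += biomorph_segments(genes, x2, y2, angle + turn, depth - 1)
    return segments

genes = [3, 2, 0, 0, 0, 0, 0, 0, 4]           # sample genome, values in -9..+9
print(len(biomorph_segments(genes)))          # 31 segments: 2^5 - 1 for depth 5
```

The initial parent bioform of the text (a single vertical segment) corresponds to a genome whose depth gene yields one recursion level.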
The evolutionary process of a bioform is as follows:
It starts from an initial parent bioform based on random gene values, which appears in the center of a 3×3 matrix, i.e. it is surrounded by 8 cells. In the next generation, 8 mutations of the parent bioform appear in these 8 cells, each with a single gene modified by one unit (positive or negative). The mutated gene of each offspring is randomly selected, as is the sign of the unit increment. Each daughter form is not identical to the parent but bears a resemblance to it, since only one of its genes is mutated. The mutations are said to be "local".
A bioform and its 8 descendants.
Of the 8 descendant bioforms, the human controller decides which descendant survives, according to some criteria (aesthetic, resemblance to a desired form, etc.). The human controller thus plays the role of nature, becoming a "breeder".
The surviving offspring becomes the new parent bioform of the next generation. It will occupy the center of the matrix and generate 8 mutations. And so on until the expected number of recursions is reached. The final appearance (phenotype) emerges from the evolutionary chain and the accumulated changes of the "genetic code" (the sequence of gene values) of the bioform, and can become very complex. Each gene in a bioform contributes to one aspect and is present at all levels of the pattern.
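The local-mutation step described above can be sketched as follows (an illustration added here; clamping gene values to the −9..+9 range follows the text, and at a range boundary the clamp may leave a gene unchanged):

```python
import random

# Generate the 8 offspring of a parent genome: each child differs from
# the parent in one randomly chosen gene, changed by +1 or -1.

def offspring(parent, n=8, rng=random):
    children = []
    for _ in range(n):
        child = list(parent)
        i = rng.randrange(len(child))          # which gene mutates
        child[i] = max(-9, min(9, child[i] + rng.choice((-1, 1))))
        children.append(child)
    return children

parent = [0] * 9       # a 9-gene parent genome, as in Dawkins' program
for child in offspring(parent):
    # each child differs from this parent in exactly one gene, by one unit
    assert sum(abs(a - b) for a, b in zip(parent, child)) == 1
```

The human "breeder" would then pick one of the eight children as the parent of the next generation, which is the selection step the text describes.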
By selecting in each generation the form that is closest to what is desired, anything can be achieved.
When Dawkins ran his bioform evolution program, he was amazed at the great variety of forms that appeared, reminiscent of real-world biological forms such as insects, trees, flowers, shrubs, beetles, butterflies, spiders, frogs, scorpions and bats. But non-biological forms also appeared, such as Aztec temples, Gothic church windows, scales, goblets, and even letters of the alphabet. Dawkins then realized that he had created a universe of forms, which he called "Biomorph Land".
Dawkins saw that, in order to construct a practical biological universe, he had to restrict the possible forms to only those that made interesting biological sense. One of the restrictions he used, for aesthetic reasons (and also to economize on the number of genes needed), was to establish a left-right symmetry, the same criterion used in the Rorschach test of inkblots.
Holy Grail and Christmas Tree.
Devil and The Queen of Spiders
In one of his forays into this universe, he happened upon a chalice-shaped pattern that captivated him greatly and which he named the "Holy Grail" (see figure), but he failed to record the sequence of changes that generated it. He offered a $1000 prize to the person who could generate it again. Thomas Reed (a software engineer from California) found it and won the prize. A few weeks later, two other searchers independently found it as well.
With his bioform generation program, Dawkins intended to demonstrate:
Evolution is a gradual and cumulative process based on chance and the mechanism of selection. Chance creates variability. Selection is what makes evolution go in a certain direction of improvement. Small improvements over many generations make it impossible to recognize the original parent. Random selection never produces a coherent design, but cumulative selection according to a certain criterion does.
Evolution is fundamentally a non-random process. Chance is only the "fuel" of the process. Cumulative (and non-random) selection is the creative and driving factor in evolution.
There is no great difference between natural and artificial selection. In natural selection it is nature that decides on the basis of its adaptability to the environment. In artificial selection there is a breeder deciding which offspring survive in each generation.
The complexity found in the natural world does not need an intelligent designer. Complexity can emerge by progressive accumulation of small modifications. Darwin's postulate that complex organisms are formed gradually by small cumulative changes is thus illustrated.
A watch is created by a watchmaker consciously, for a purpose. But the cosmos and humankind have been created without any purpose, by the blind, unconscious and automatic forces of natural selection: the "blind watchmaker".
At the molecular level, organisms obey the laws of physics, but on a large scale, they do not. Life is an emergent phenomenon.
A single primordial form −the vertical segment, in the case of bioforms− may be the origin of all forms. Darwin's hypothesis that all living things descend from a common ancestor is thus illustrated.
The recursive process used in the generation of bioforms also occurs in the embryos of living things. All embryos grow by cell division (one cell divides into two daughter cells) and the final form of the organism emerges as a consequence of many small local cellular changes.
Dawkins' bioforms and his conclusions have received several criticisms. The main one is on the grounds that a selection mechanism that requires a human agent cannot be Darwinian. To this Dawkins replies that the selection used is not natural but artificial. To this it is objected that the selection should not be qualified as artificial but as human.
Characteristics of AL
Research.
AL tries to describe what exactly life is; whether life is something that depends on a physical support or whether it is something independent of physical support and based only on information structures.
The interdisciplinary character that the study of life demands makes AL a meeting point of many disciplines: biology, physics, philosophy, mathematics, computer science, logic, linguistics, cybernetics, general systems theory and system dynamics.
Simulation.
AL can take two forms of simulation:
Synthetic. It is based on the construction of devices made up of mechanical elements that try to simulate living systems. Robots belong in this category.
Virtual (or computational). It is based on the development of symbolic systems without mechanical support and implementable in a computer by means of information structures. This form of simulation provides an unbeatable framework for the study of life phenomena because of its simplicity, flexibility and ease of implementation, transcending the limitations of scientific observations. This second aspect is the one that interests us most here.
AL is not an empirical science in the traditional sense. Its field of experimentation is the computer, the computational world, not the natural world.
Types of AL.
A key question is whether what we generate in the computer possesses some characteristics of life. Is the computational model the medium for the study of life or is it the deep, abstract reality that underlies all biological phenomena? Do digital beings really have life?
Tom Ray −author of the program Tierra− argues that he is not simulating life on a computer, but synthesizing it.
According to Heinz Pagels, "The computational model is reality, not a means to study it."
According to Ed Fredkin, "If a computer can't do something, neither can nature."
According to Christopher Langton, "We want to create life in the computer, not in test tubes."
"Strong" AL is the view that it is possible to actually create life through simulation, which is a (virtual) reality that has the same ontological status as the real world, since it is a real physical environment and using real physical resources. Langton is an advocate of strong AL: simulations of living things have life, although he acknowledges that this question is undemonstrable.
The "weak" AL denies that it is possible to create life, but that it is possible to simulate the processes of life on the computer using techniques such as CAs, evolutionary algorithms, genetic algorithms, agent-oriented programming, neural networks, fuzzy logic, and so on. Neural networks attempt to simulate the functioning of the nervous system. Genetic algorithms try to mimic natural selection by mutation and interbreeding of genetic material.
Complexity science.
AL is included in the so-called "Complexity Sciences", which study the phenomena common to all complex systems modeled as interactions of simple elements. Ilya Prigogine is considered one of the founders. The main research center for complex systems is the Santa Fe Institute (SFI), founded in 1984; one of its founders is the physicist Murray Gell-Mann (proposer of the quark model of elementary particles).
Difference with AI.
AL is a separate field of study from Artificial Intelligence (AI). AI is to psychology as AL is to biology. The goal of AI is to simulate human intelligence. The goal of AL is to simulate the behavior of living beings. However, the boundary between the two disciplines is sometimes blurred.
Difference with other disciplines.
AL is different from other more or less related disciplines:
Genetic engineering or synthetic biology, which is based on the DNA of organic matter.
Biolinguistics studies the analogies between linguistic and biological systems.
Bioinformatics is the application of computer science to the information management of biological systems.
Mathematical biology or biomathematics develops mathematical models of biological processes.
Theoretical biology aims at the conceptual characterization of biological problems and their formalization by means of mathematical models and computer simulations.
Biocybernetics. It is the science of communication and control between living organisms and their interaction with mechanical or electronic systems.
Computational biology develops theoretical computational models of biological systems.
Biocomputing or bioinformatics is computation with the help of some biological component. It is comparable to so-called "natural computing", such as computing with DNA molecules and cellular computing with membranes.
Evolutionary developmental biology (or evo-devo), a discipline with antecedents in the work of D'Arcy Thompson, aims to discover patterns of self-organization in organisms.
The challenges of AL
AL is a young discipline and has many challenges ahead of it, including the following:
To understand the nature of life, evolution and emergent phenomena. To discover the fundamental laws of evolution and biological complexity. To see whether life is a universal concept that can be applied beyond the domain of biology. Is the cosmos a living being? In what sense can certain entities (such as societies or financial markets) be said to have life? Are there degrees of life, or is life discontinuous in nature?
Provide AL with a foundation by developing a unified, abstract and coherent information theory of evolutionary living systems, in the aspects of information generation, information processing and information transmission. Life should be better understood at the informational, abstract, symbolic and computational level. How are new rules (and higher order rules) and symbolic structures generated to formalize evolution?
Create a formal model for life that governs at all levels of life: bacteria, cells, organs, etc. This is the Holy Grail of AL.
Investigate the characteristics common to all evolutionary processes. And whether there are classes of evolutionary processes.
Produce and understand evolutionary processes of continuous and unlimited increasing complexity (open-ended evolution).
Investigate whether digital systems have the same potential for innovation or evolutionary creativity as living things.
Investigate how hierarchical levels of living things communicate and interact at local and non-local levels.
Investigate whether it is possible for mind and intelligence to emerge in an artificial living system.
Investigate alternative life forms, as they may help to understand the phenomenon of natural life.
Investigate the relationship between AI and AL, the intrinsic relationship between artificial living things and artificial intelligence. Is it more beneficial to study AI in the context of AL? Can AI emerge from AL?
Analogies between AI and AL
AL is closely related to AI because artificial living systems can exhibit a certain degree of intelligence. In addition, there are certain analogies between the two disciplines. In several respects they can be considered complementary disciplines.
Both are interdisciplinary in nature and make use of the same computational resources. Both promote generalization and abstraction to help understand psychology and biology, respectively.
AL tends to use low-level, bottom-up models in which global behavior emerges indirectly through computational recursion, behavior that can be difficult to predict. The elements are simple, and the rules with which the elements interact are also simple. In AL, like complex systems in general, there is no centralized control, although boundary conditions may exist.
AI uses top-down models to directly achieve desired and intended behavior (as, for example, expert systems). Complexity is programmed directly into the elements. In AI models there is usually centralized control.
AI was born at the Dartmouth College workshop (proposed in 1955 and held in 1956); John McCarthy was the first to use the term "artificial intelligence". AL was born at Los Alamos National Laboratory in 1987; Christopher Langton was the first to use the term "artificial life".
In both disciplines we speak of "strong" and "weak" versions.
The Turing test has an AI version and an AL version.
Turing proposed his famous test to determine whether a machine was capable of emulating the human mind. It is based on the exchange of messages between a person and a machine. The person sends meaningful messages. The machine tries to respond to those messages in a coherent way, with the help of syntactic rules. The Turing test emphasizes linguistic ability, but does not take into account the robotic, movement capabilities associated with life. A Turing test that includes the latter aspect is a generalized Turing test.
Langton has formulated a test in AL, analogous to the Turing test for AI, that has laid the groundwork for grounding life on theoretical computer science.
MENTAL and Artificial Life
Natural, Artificial and Abstract Life
A distinction must be made between three types of life:
Natural (or real) life is the life of living organisms.
Artificial life is life simulated by a computer, which is limited by physics (electronics). Artificial life can only reflect the complexity of the artificial physical world in which the organism "lives".
Abstract (or formal) life is independent of implementation and is not limited because it belongs to the mental world.
Of these three categories, the most profound is the abstract, followed by the artificial, and finally the natural.
Similarly, one can also speak of natural physics (that of the real world), artificial physics and abstract physics. Artificial physics is that which can be simulated on a computer, it is physics as it is and as it could be. Abstract physics is independent of any implementation.
With respect to computation, three types must be distinguished:
Natural computation refers to the way in which the laws of nature produce modifications in certain systems, which can be interpreted as computational processes. Natural computation includes, among others, computation with DNA molecules and cellular computation with membranes.
Artificial (or physical) computing is computing on a computer, which is dependent on the hardware used and is constrained by the laws of physics governing the hardware. Computers of the von Neumann architecture, although inspired by the Turing machine, are of this type.
Abstract or formal computation, which is implementation-independent because it is of the mental type. The mathematical operations we usually perform (the formal manipulation of symbols) are of this type. The Turing machine, which is a theoretical and abstract device, is also of this type. Paradoxically, computational processes (those that a Turing machine can perform) are called "effective" or "mechanical", even though they have no relation to the physical world.
Ideal computation is a computation in which there is an isomorphism between artificial and abstract computation. When isomorphism is replaced by identity we have computationalism (or computationalist philosophy). This is the case of a hypothetical computer whose primitives are the primitives of MENTAL. In MENTAL abstract physics, abstract mind, abstract computation and abstract life converge.
According to John Wheeler's "it from bit" theory, deep within reality is information (the bit). The known surface world (the it) is a manifestation of information. Computation belongs to the deep level of reality. The universe is a representation, map or superficial representation of a deep process which is computation. The universe is a computer that has been calculating for millions of years something that science is trying to discover.
Archetypes and abstract life
Language truly is the key to everything, as it is to consciousness. Life is the manifestation of consciousness over matter, the result of which is an organized and interrelated unity. Abstract life is based on the archetypes of consciousness.
The abstract archetypes of consciousness are universal and manifest on all planes of reality, including the plane of life. The manifestations of abstract archetypes allow us to generate abstract life in a self-referential, closed form, to generate autonomous forms. Biological functions are manifested archetypes.
By sharing the same archetypes, linguistics unites with biology. Any union must be made from the deep, from the archetypes.
The language of life is the same as for the mind and for nature: MENTAL, because it is based on the same archetypes of the consciousness manifested in these planes.
Regarding the question raised as to whether or not computational processes (which simulate life) are reality, it must be said that the computational is at a deep, abstract level, beyond the physical and mental. When the computational is expressed through the primary archetypes, we are at the level closest to consciousness, that is, at the level closest to life. Primary archetypes manifest at all levels. MENTAL is a model of mind, consciousness and life. MENTAL is a universal computer that manifests at all levels of reality. It is a universal model.
MENTAL, a language for AL
AL brings together many disciplines, but the best way to study it is through the mechanisms common to all of them, which are the primary archetypes. The advantages of this approach are:
It integrates the principle of upward (ascending) causation and the principle of downward (descending) causation.
MENTAL is a language based on downward causation, but in emergent phenomena (as in the game of life) upward causation appears.
Simplification.
It involves an extraordinary simplification of the theoretical-practical models. It is the same as with AI, where the developments are also greatly simplified. The simplification is due to the power of primary archetypes in general, but particularly to the power of generic expressions and their multiple variants: shared, linked, virtual, interlaced, etc. In the MENTAL model, one moves from simple to complex expressions.
Complexity.
With MENTAL, complexity is explained as the recursive application of simple rules, as in the game of life. MENTAL, like the game of life, shows that life can have a simple origin.
Simulation.
Models are abstract and can be implemented in a computer whose primitives are precisely the same as those with which the models are built. As MENTAL is the closest formal language to consciousness, it is the best language to simulate the phenomena of living systems.
MENTAL is the demonstration that global or general principles (the primitives) and particular local laws produce something that resembles life.
Abstraction.
Semantic primitives represent the supreme level of abstraction. MENTAL is the maximum possible approach to the subject of AL, in the same way as to AI, consciousness and semantics. The abstraction covers the physical, mental and vital levels.
Principles.
For Ed Fredkin, the same principles apply to the universe as to the living organisms that inhabit it. These common principles are precisely the primary archetypes, the archetypes of consciousness.
CAs.
One need not rely exclusively on CAs with local behavior; there can also be non-local interactions. CAs are an application subdomain of MENTAL, with a specially restricted environment and restricted expressions as well. CAs and the game of life are universes to be explored, like MENTAL.
Entities.
Virtual entities can be defined from the main entity. Virtual entities would be like views or derivations of the main entity.
All kinds of imaginary or virtual entities can be defined, only limited by the degrees of freedom. We can easily model all kinds of biological processes.
In MENTAL, virtual and synthetic life coincide, because we synthesize (analogously to chemistry) abstract entities by means of primitives.
Foundation.
MENTAL provides a foundation for AL. It clarifies the process of life by contemplating it from a deep level, that of the primary archetypes. Since biological phenomena are very complex, we resort to abstraction and generalization to understand them.
MENTAL primitives are the DNA of all that exists. It is the universal blueprint underlying all life forms. MENTAL is the universal genotype, the DNA. The phenotype is the applications.
By its supreme level of abstraction, MENTAL blurs the boundary between AI and AL, as both have the same foundation. MENTAL is transdisciplinary.
Diversity.
The diversity of expressions is produced by combinatorics of the basic DNA (the primitives), as it happens with living beings.
Creativity.
MENTAL is creative. Creativity occurs discontinuously, in leaps. Evolution is contemplated as the search for creative combinations.
Freedom and consciousness.
Characteristics of life are freedom and consciousness. MENTAL primitives are archetypes of consciousness and degrees of freedom.
The game of life. Analogies with MENTAL
The fact that the game of life generates such a diversity of possible forms indicates that it is something profound, close to an archetype; the deeper an archetype, the greater its manifestations. This is why it aroused so much interest and why it has philosophical connotations. In this sense it has certain analogies with MENTAL: its simplicity, the infinity of potential forms it can generate, the fact that it is a computational model, and its integration of information and environment.
The rules of the game of life lie on the border between underpopulation and overpopulation: with too few living neighbors a cell dies of isolation, with too many it dies of overcrowding.
Unite opposites. It unites simplicity and complexity. It acts on the border between order and chaos. It unites formal language and visual (geometric) language, since it generates observable qualitative behavior. The cell board itself also unites the opposites of black and white.
It is initiatory. It activates the imagination and consciousness. Many people who see it for the first time are amazed and captivated, wanting to experience it for themselves.
It also has an analogy with the mandala generation program, since simple geometric elements generate an infinite diversity of forms, although in the case of mandalas the images are static.
MENTAL coding of the game of life
// We assign a value "0" to a dead (white) cell and "1" to a live (black) cell.
// c1 is the two-dimensional array of cells of n×n elements.
// c2 is the matrix at the next instant.
// initial values
(n = 1000) // half-size (horizontal and vertical) of the board of cells
(m = 1000) // number of iterations
(k = [−n…n]°)
[(c1(k k) = 0)] // initially set the matrix to zeros
Another criterion for detecting that a stable configuration has been reached is to check whether the boards c1 and c2 are identical.
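This stability criterion can also be sketched outside MENTAL. The following Python sketch is an assumption of mine, not part of the MENTAL coding: the function names are illustrative, and the board is represented as a set of live (x, y) cells rather than a matrix. It computes one generation and iterates until the next board equals the current one.

```python
from collections import Counter

def step(c1):
    """Compute the next generation from c1, a set of live (x, y) cells."""
    # Count, for every position adjacent to a live cell, its live neighbors.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in c1
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step with 3 neighbors, or with 2 if already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in c1)}

def run_until_stable(c1, max_iter=1000):
    """Iterate until the next board equals the current one (c2 == c1)."""
    for i in range(max_iter):
        c2 = step(c1)
        if c2 == c1:
            return c1, i  # stable (still-life) configuration reached
        c1 = c2
    return c1, max_iter
```

Note that comparing only c1 and c2 detects still lifes but not oscillators: a blinker never satisfies c2 = c1, so a fuller test would compare against the last few generations.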
Regarding the self-reproduction of a machine, the process is very complex if it is based on local, continuous transition rules. If it is based on non-local rules, however, the solution is discontinuous and very simple: it suffices to specify a transformation rule that copies the initial configuration onto another area of the board, displaced by the vector (dx dy).
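As an illustration (the set-of-coordinates representation and the function name are assumptions of mine, not part of the MENTAL text), the non-local copy rule reduces to a single set transformation:

```python
def replicate(config, dx, dy):
    """Copy a configuration onto the area displaced by the vector (dx, dy).

    The board is represented as a set of live (x, y) cells; the result
    contains the original pattern together with its displaced copy.
    """
    copy = {(x + dx, y + dy) for (x, y) in config}
    return config | copy
```

The whole "reproduction" happens in one discontinuous step, in contrast with the long chains of local transitions required by von Neumann-style self-replication.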
Bioforms vs. Mandalas
Dawkins' bioforms have local developmental rules, i.e. the changes from one generation to the next are small and gradual. From cumulative local changes a great variety of complex forms can emerge. The same is true of Conway's game of life, which also uses local rules and can also produce many complex dynamic forms. The fact that such a variety of (superficial) forms is produced indicates that here too we are dealing with something close to a (deep) archetype.
The program (B) that generates the bioforms has certain analogies with the program (M) that generates the mandalas that illustrate this book:
B uses only straight lines. M uses straight lines, circles and regular polygons. Both evoke geometric archetypes.
B uses branching as a recursive mechanism. M uses branching and other additional mechanisms such as reduction, pursuit, etc.
B drawings have left-right symmetry. Drawings of M have circular symmetry.
Both programs have parameters (which are called genes in the case of B), which are the degrees of freedom, with which potentially infinite shapes can be generated.
B focuses on life forms. M focuses on mandalas, which are manifestations of consciousness.
Both programs can produce emergent forms impossible to foresee a priori (before execution), which can be simple or complex.
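The recursive branching of B can be suggested with a short sketch. This is not Dawkins' actual program: the two "genes" chosen here (branching angle and length decay) and the function names are illustrative assumptions. The sketch returns line segments rather than drawing them, and the left-right symmetry mentioned above follows from mirroring the branch angle:

```python
import math

def branch(x, y, angle, length, depth, genes, segments):
    """Grow one segment from (x, y), then recurse into two mirrored children."""
    if depth == 0:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    spread, decay = genes  # illustrative "genes": angle spread, length decay
    branch(x2, y2, angle + spread, length * decay, depth - 1, genes, segments)
    branch(x2, y2, angle - spread, length * decay, depth - 1, genes, segments)

def biomorph(genes, depth):
    """Return the segments of a symmetric biomorph-like form grown upward."""
    segments = []
    branch(0.0, 0.0, math.pi / 2, 1.0, depth, genes, segments)
    return segments
```

Varying the two gene values (the degrees of freedom) already produces a potentially infinite family of branching forms, in the spirit of both B and M.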
Addenda
The foundation of the AL
AL was officially born as an independent discipline in 1987 at the International Conference on the Synthesis and Simulation of Living Systems, also known as "Artificial Life I" or "ALIFE I", held at the Los Alamos National Laboratory (New Mexico). It was attended by more than a hundred scientists from disciplines such as biology, computer science, physics, philosophy and anthropology. The event was supported by the Santa Fe Institute, a private entity dedicated to interdisciplinary research in Complex Systems; AL is considered a research area within Complex Systems. The promoter of the meeting was Christopher Langton, the first to introduce the term "artificial life".
Langton has investigated CAs in their qualitative aspects by evaluating emergent behaviors. In particular, he studied computation "at the edge of chaos" −the subject of his PhD thesis− in CAs, based on the notion of entropy. He found simple behaviors (reaching fixed or periodic configurations in a few steps), complex ones (converging only over long periods) and chaotic ones (producing no definite forms). He postulated that complex evolutionary behaviors occur only in a narrow range of specific circumstances. He has also investigated models of self-replicating machines.
More about the game of life
Conway initially experimented with the game of life using a Go board, which is formed by horizontal and vertical lines on whose intersections white or black stones, corresponding to the two players, are placed. Soon afterwards, personal computers became available and he began to experiment with them. The game of Go also inspired Conway to invent surreal numbers.
The game of life became known in October 1970 through Martin Gardner's column in Scientific American. The article aroused great interest. With the introduction of personal computers, people began experimenting with various initial forms, and many interesting structures were discovered: the R-pentomino; indefinitely growing patterns such as guns, which emit gliders; puffer trains, which move and leave a trail of debris; rakes, which move and emit spaceships; exploders; etc.
On May 18, 2010, Andrew J. Wade discovered a self-replicating pattern (which he named "Gemini"). This pattern replicates after 33.6 million generations. In each replication it eliminates the parent.
Many variants of the game of life are known. The standard game is symbolized by S23/B3: a living cell survives with 2 or 3 living neighbors, and a dead cell comes to life (is born) with exactly 3 living neighbors. For example, the S16/B6 variant indicates that a cell survives if it has 1 or 6 neighbors, and that a dead cell needs 6 neighbors to come to life. The S12/B1 rule generates approximations to the Sierpinski triangle when applied to a single living cell.
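The S/B notation can be made concrete with a small sketch (the parser and the generalized step function below are illustrative assumptions of mine, not a standard implementation; the board is again a set of live cells):

```python
from collections import Counter

def parse_rule(rule):
    """Turn a rule string such as 'S23/B3' into (survive, born) digit sets."""
    s_part, b_part = rule.upper().split("/")
    return {int(d) for d in s_part[1:]}, {int(d) for d in b_part[1:]}

def step_variant(cells, rule="S23/B3"):
    """One generation of a life-like CA under the given S/B rule."""
    survive, born = parse_rule(rule)
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if (n in survive if c in cells else n in born)}
```

For instance, under S12/B1 a single live cell immediately gives birth to all eight of its neighbors while dying itself, the seed of the Sierpinski-like growth mentioned above.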
Versions with hexagonal or triangular mesh and with more than two states (which are represented by colors) have also been developed. Today there are hundreds of programs and online versions of the game of life.
Tom Ray and the AL
Tom Ray is a biologist who has been searching for the secret of the evolution of life in the real world. He first sought it in Costa Rica, studying butterflies and ants, but found this research frustrating, as he wanted to observe the effects of evolution over thousands of generations of organisms. He finally found what he was looking for in MIT's AI department. Ray's discovery was that programs can act like living organisms: interacting, self-replicating, undergoing random mutations, and passing code to their offspring. Ray learned genetic programming, and one day his first "digital creatures" were born. The space where these virtual creatures lived he called "Tierra" (Spanish for "Earth"): a computer simulation of the evolution of life, with mutations, inheritance and natural selection.
Turing and the AL
Alan Turing −who conceived an abstract device that we know today as the "universal Turing machine", capable of executing any algorithm, and which is the foundation of computers− was also a pioneer in what we today call "artificial life". He died while working on the computer simulation of biological development processes (morphogenesis).
Bibliography
Álvarez López, José. Bioinformática. Bases para una nueva biología. Espacio y Tiempo, 1992.
Attwood, Teresa; Parry-Smith, David. Introducción a la Bioinformática. Pearson Alhambra, 2002.
Bohm, David; Peat, F. David. Ciencia, orden y creatividad : las raíces creativas de la ciencia y la vida. Kairós, 2010.
Emmeche, Claus. Vida simulada en el ordenador. La nueva ciencia de la vida artificial. Gedisa, 1998.
Fernández, Julio. Vida artificial. Ediciones de la Universidad Complutense de Madrid, 1992.
Jenkins, Lyle. Biolingüística. Cambridge University Press, 2002.
Langton, Christopher (ed.). Artificial Life. SFI Studies in the Sciences of Complexity. Addison-Wesley, 1988.
Lahoz Beltrá, Rafael. Bioinformática. Simulación, vida artificial e inteligencia artificial. Díaz de Santos, 2004.
Levy, Steven. Artificial Life. A report from the frontier where computers meet biology. Vintage, 1992.
Prata, Stephen. Vida artificial. Anaya Multimedia-Anaya Interactiva, 1994.
Ramos Salavert, Isidro. Vida artificial. Universidad de Castilla-La Mancha, Servicio de Publicaciones, 1995.
Rayo, Agustín. El juego de la vida. Investigación y Ciencia, Diciembre 2010.
Smith, Justin E.H. Divine Machines: Leibniz and the Sciences of Life. Princeton University Press, 2011.
Solé, Ricard. Vidas sintéticas. Una aproximación revolucionaria a la ciencia, la historia y la mente. Tusquets Editores, 2012.
Stewart, Ian. Las matemáticas de la vida. Crítica, 2011.
Stewart, Ian. El segundo secreto de la vida. Crítica, 2006.
Sullins III, John P. Gödel incompleteness theorems and artificial life. Internet.
Thompson, D’Arcy. Sobre el crecimiento y la forma. Akal, 2011.
Von Neumann, John. Theory of Self-Reproducing Automata. University of Illinois Press, 1966.