MENTAL, a Simple Language Foundation of Complexity
"Science is really the search for simplicity" (Claude A. Villee).
"In the deepest depth lies the simplest" (Ken Wilber).
"Everything is very difficult before it is simple" (Thomas Fuller).
Complexity vs. Simplicity
Complexity and simplicity are qualitatively opposite concepts that are connected:
Both concepts are ambiguous; they need to be defined and formalized, and the relationship between the two needs to be clarified.
Complexity is linked to the accidental, to the superficial, to the unimportant. Simplicity is linked to the important, the essential, the significant. "Simplicity consists of eliminating the obvious and adding the significant" (John Maeda).
Complexity and simplicity are concepts that can be misleading. What is apparently complex may actually be simple or have a simple origin.
An apparently very complex fractal structure, such as the Mandelbrot fractal, has a very simple generating law.
The spread of a disease may have a simple origin, but it can produce something complex and difficult to manage.
And vice versa: what seems simple can actually be complex. For example:
A cell phone can be very easy to use, but hides great complexity.
The Earth seen from outer space may look simple, but it is actually complex.
Children learn to speak with great ease, a task that may seem simple but is enormously complex.
Simplicity and complexity are paradoxical concepts. Indeed, achieving simplicity is a very complex task. But once installed in simplicity, the complex becomes simple.
"Simplicity is complex" (John Maeda).
"There is nothing as complex as simplicity" (Kai Krause).
"The best thing is always the simplest thing, the bad thing is that to be simple you need to think a lot" (John Steinbeck).
"The simple can be more difficult than the complex. You have to work hard to get your mind uncluttered and make things simple. But it's worth it in the end because once you get it right you can move mountains" (Steve Jobs).
"Ironically, even the simplest things can be hard to understand because they are abstract" (John Baez).
"Simplicity is the ultimate sophistication" (Leonardo da Vinci).
Another aspect of the simplicity-complexity paradox manifests itself in the relationship between the particular and the general or universal: studying or modeling the particular is more complex than studying or modeling the universal because the universal is always simple.
Characteristics of complexity
Complexity is presented as confusing, irrational, contradictory, messy, ambiguous, entangled, uncertain, unsettling, irreducible, imperfect, and difficult or impossible to understand. There are many systems that can be considered complex, such as the brain, our organism and the universe.
Today, the term "complex" refers to a system in which everything is interrelated, like a web of fine threads. In fact, the term "complex" comes from "complexus", meaning "that which is woven together".
Complexity implies a recognition of the limitations of our mind to comprehend a system in its full breadth. These limitations are linked to Cartesian-inspired thinking: rational, analytical, particular, separative, reductionist, quantitative. To overcome these limitations, another type of thinking is necessary: intuitive, systemic, holistic, global, unifying, generalizing, qualitative. These two modes of thought or consciousness must be integrated and complement each other.
The complexity of a system is associated with the diversity of constituent elements and their relationships and with our ignorance about its essence, principles or foundations. When we are confronted with something we consider complex it is because we are not able to discriminate the essential, fundamental constituents and relationships.
Characteristics of simplicity
Rationality.
The simple is presented to us as clear, rational, understandable, orderly, harmonious, beautiful, certain and perfect. "Simplicity = Sanity" (John Maeda).
Foundation.
Everything is based on simplicity. Complexity can only be approached from simplicity.
Truth.
Simplicity brings us closer or leads us to truth, because truth is simple.
"You can recognize truth by its beauty and simplicity" (Richard Feynman).
"Simplex sigillum veri" (Simplicity is the seal of truth). Physics auditorium of the University of Götinga.
Value.
The simple has more value than the complex.
"The simpler a theory is, the greater its value" (Bertrand Russell).
Consciousness.
We must learn to perceive the simple that hides behind the complexity. This leads to greater awareness, for the same simple concepts are hidden in all things. Simplicity implies awareness, power, creativity, and a new way of contemplating the world.
Universality.
The simpler a thing is, the more general or universal it is.
Mind-nature union.
The need for simplicity is not just to economize, but to go deeper into the nature of reality: where mind and nature meet.
Beauty.
In simplicity there is beauty because there is harmony and order.
"What is beautiful is simple" (René Mey).
"Beauty resides in simplicity" (Einstein).
Creativity.
Simplicity is the great engine of innovation, creativity and the deep connection between all things.
"Simplification facilitates discovery" (Gödel).
The challenge of complexity
The challenge of complexity is to establish a new, more general or universal paradigm:
That it be founded on a simple, general or universal language.
That it be a language of consciousness.
That it integrate ontology and epistemology.
That it integrate simplicity and complexity.
That it address the problem of the knowledge of knowledge.
That it integrate the computational and the descriptive.
That it bring us closer to, or lead us to, the truth.
That it cover all kinds of relationships between its elements, not only causal ones.
That it be of a fractal type, that is, that the same principles, elements or resources apply at all levels.
That it be a humanistic language.
That it be the foundation of a universal science that goes beyond the classical disciplines of knowledge organization and has a transdisciplinary approach.
That it cover all types of systems, including the human being, society, mind and nature.
That it allow complexity to be formalized.
In recent years, attempts have been made to formalize the ambiguous, fuzzy or psychological concept of complexity and at the same time to overcome the limitations of conventional (or traditional) disciplines. Thus, the so-called "complexity sciences" have emerged, all based on a systemic approach, which allow the creation of models of complex systems: general systems theory, information theory, cybernetics, self-organization theory, chaos theory, fractal theory, etc.
Computational and Descriptive Complexity
Computational Complexity
The computational complexity of a mathematical entity is defined as "the length of the shortest program that generates it". It is also called "algorithmic complexity" or "Kolmogorov complexity" and is a measure associated with the degree of difficulty of specifying a mathematical entity by means of a program, algorithm or operational (constructive, computational) expression that generates that entity.
This definition makes no reference to:
The processing time. The algorithm is supposed to run in a finite number of steps.
The memory space occupied by the program (static space) and the space it needs during execution (dynamic space). The computational environment (theoretical or real) is assumed to be finite.
The possibility that the algorithm is dynamic, i.e. self-modifying during execution.
To illustrate the concept of algorithmic complexity, let us look at the structure of sequences (a small computational sketch follows this list):
If a sequence x has a certain pattern of formation of its elements, it is possible to generate it by an expression shorter than the sequence itself, i.e., it can be represented in a compressed form. In this case its algorithmic complexity satisfies
K(x) < Length(x)
If a sequence x is random, the expression that generates it is the sequence itself, it is not possible to compress it and its algorithmic complexity is maximum (with respect to all sequences of the same length). In this case,
K(x) = Length(x)
It can be shown mathematically that most sequences of a given length are random and therefore incompressible. Kolmogorov proved in the 1960s that there are infinitely long sequences that cannot be compressed and that have all the properties of random sequences.
The simplest pattern of a sequence corresponds to the case where all its elements are the same. In this case, maximum compression can be achieved and complexity is minimal (with respect to all sequences of the same length).
Natural numbers −a special type of sequences− are mostly random. Only an infinitesimal subset of them are compressible.
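The idea that patterned sequences admit short descriptions while random ones do not can be illustrated with an ordinary compressor. The sketch below (an illustration added here, not part of the formal definition) uses Python's zlib as a crude, computable upper bound on K(x); the true algorithmic complexity itself is incomputable, as explained below.

import zlib
import random

def compressed_length(data: bytes) -> int:
    """Length of the zlib-compressed form of data: a crude, computable
    upper bound on its algorithmic complexity (up to an additive constant)."""
    return len(zlib.compress(data, 9))

patterned = b"a" * 10_000                                          # simple repetitive pattern
random_data = bytes(random.getrandbits(8) for _ in range(10_000))  # no exploitable pattern

print(len(patterned), compressed_length(patterned))      # 10000 -> a few dozen bytes
print(len(random_data), compressed_length(random_data))  # 10000 -> roughly 10000 bytes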
Algorithmic complexity depends on two factors:
The language used.
There is an invariance theorem that states that a computational language can emulate any other of the same type, i.e., all languages are computationally equivalent. Therefore, any language can be used, but the lengths of expressions in different languages will differ by at most a certain constant.
The generation algorithm used.
It is impossible to ensure that the algorithm used is the shortest possible. Only an upper limit can be set. Since alternative shorter (not known) algorithms could exist, there is what is called "algorithmic uncertainty" and therefore the algorithmic complexity is incomputable, i.e., undecidable.
In addition to algorithmic uncertainty (when at least one algorithm is available), uncertainty exists when we are unable to find any algorithm (e.g., for a sequence) and thus do not know whether or not that sequence is random. Chaitin's theorem proves that there exist undecidable sequences, in the sense that it is impossible to know whether or not they are random. This theorem is analogous to Gödel's theorem concerning formal axiomatic systems, in which it is proved that there exist undecidable sentences.
The concept of algorithmic complexity was first raised by Ray Solomonoff, defined by Andrei Kolmogorov and later extended by Gregory Chaitin, all during the 1960s.
Descriptive complexity
It is the measure associated with the difficulty of describing a mathematical entity. The descriptive complexity depends in this case on three factors:
On the descriptive language used.
Traditionally, the formal language of mathematical logic has been used.
On the descriptive expression used.
There may be several valid alternative forms. Again, there is uncertainty, but of a descriptive type.
On the resolution, i.e., the degree of detail of the description.
The field of descriptive complexity began in 1974, when Ron Fagin [1974] showed that the complexity class NP coincides with the class of problems describable in second-order existential logic [see Addendum]. This means that the computational complexity of an NP problem can be understood as a function of the complexity of its logical description.
According to Fagin, there is the following correspondence between measures of descriptive complexity and the resources associated with algorithmic complexity:
The depth level of quantifiers corresponds to the computation time in a parallel computing environment.
The number of variables in the descriptive expression corresponds to the amount of hardware required (memory and processors).
Complexity according to Wolfram
A "new kind of science"
Stephen Wolfram published in 2002 "A New Kind of Science" (NKS), a voluminous book (1197 pages), the result of many years of research in the field of computation, in which he claims to have invented/discovered a universal scientific paradigm for modeling systems and for understanding the universe itself. Wolfram is the creator of "Mathematica", a mathematical computation and graphing software system, with which he carried out all his research. His research focused on cellular automata (CAs), because he discovered that a set of simple, local rules, applied recursively, could generate forms or patterns of great complexity. This is explained by the fact that, although the rules are local, their effects (when the rules are applied recursively) spread throughout the system, affecting it globally.
A cellular automaton (CA) is a discrete computing system. It consists of a grid of one or more dimensions, made up of cells, where each cell has a state within a finite set of states (e.g., black or white), and a finite set of local rules that establish the evolution of the system in (discrete) time, such that the state of each cell at time t+1 is a function of the state at time t of the neighboring cells. Each time the rules are applied, a new generation is obtained, thus obtaining a simple dynamic model that can produce complex results.
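As a concrete illustration of how such a system is iterated (a minimal sketch in Python, not Wolfram's own code), the following evolves a one-dimensional, two-state CA under an elementary rule such as rule 110, discussed later; each new generation is computed from the previous one by looking only at each cell and its two immediate neighbors.

def step(cells: list[int], rule: int) -> list[int]:
    """One generation of an elementary (1D, two-state) cellular automaton.
    The state of each cell at t+1 depends on the states at t of the cell
    and its two neighbors; rule is the Wolfram rule number (0-255)."""
    n = len(cells)
    new_cells = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right   # value 0..7
        new_cells.append((rule >> neighborhood) & 1)         # corresponding bit of the rule number
    return new_cells

# Start from a single black cell and print a few generations of rule 110.
width, generations = 64, 20
row = [0] * width
row[width // 2] = 1
for _ in range(generations):
    print("".join("█" if c else " " for c in row))
    row = step(row, rule=110)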
The publication of NKS provoked some controversy. For some it was no breakthrough, but a (somewhat exhaustive) presentation of things already known. For others, these ideas even threaten the very notion of science.
Wolfram's conclusions, after many years of research, can be summarized as follows:
All processes, natural (those occurring spontaneously in nature) and artificial (those produced by human effort) can be considered computations. Everything is computation in nature. And every process can be represented by a model of CAs.
The universe as a system can be considered as a computer.
Behind every natural process there is a computer program (algorithm) responsible for that process. Therefore, there must also be a program that governs the entire universe.
The "computational world", the abstract world constituted by the set of all possible programs that can be written, must be systematically explored. Although this set is infinite, it can be investigated starting from the simplest. Wolfram initially chose one-dimensional CAs, with two states (black and white) and rules based on this state and that of adjacent cells.
This approach is the opposite of the usual approach of looking for theoretical models of natural phenomena. For Wolfram, exploring the abstract computational world is of great importance for understanding the natural world, because it brings new ideas and functionalities. You have to experiment with the computational world and then confront the results with the natural world.
Discrete computations make it possible to capture all physical phenomena. Equations are not needed, as in conventional science, which uses models represented by equations with continuous variables and which have no direct relation to nature, since the universe is discrete. Moreover, equations do not work with complex phenomena, so they cannot model nature. Traditional mathematics cannot represent complexity, but complex systems can be algorithmized.
The universe is best explained on the basis of simple computer programs made up of simple rules applied recursively. For Wolfram, nature is written in the language of computer programs, correcting Galileo, who claimed that nature is written in the language of mathematics.
Nature uses the simplest possible algorithms (principle of computational economy). The simpler an algorithm is, the more likely it is that nature will use it. For example, there are infinitely many possible forms of shells, but in nature there are only half a dozen. Natural selection, in this case, is the selection of simplicity. The simpler a system is, the more likely it is to appear in complex contexts. Nature uses simple knowledge, of maximum computational economy and reusability.
In essence, deep down, everything is simple. Complexity is only apparent, on the surface. All the complexity of the universe emanates from a series of simple rules. The complexity of systems is due only to the large number of simple components interacting simultaneously.
The universe is a gigantic CA. The universe is "running" a single algorithm, which must necessarily be simple. The "ultimate theory" would be to know the code (the rules) of this algorithm.
There are only four classes of behavior in CAs:
Class 1 are repetitive, homogeneous, uninteresting patterns.
Class 2 are stable or periodic patterns.
Class 3 are well-defined, but chaotically organized patterns.
Class 4 are patterns that are neither regular nor random. Some order appears, though unpredictable. And it appears intelligent. Rule 110 is the paradigm of this behavior.
Class 3 and 4 systems are universal and therefore computationally equivalent. The CA that governs the universe is class 4.
All processes have the same fundamental constraints arising from the available resources. The entire universe is continually computing its future and does so at the rate it can because of the limitations of the basic laws of nature.
The CA paradigm is a universal model of computation, like the Turing machine, i.e. it allows any algorithm to be executed. Specifically, of all the rules explored by Wolfram, rule 110 corresponds to one of the simplest CAs capable of universal computation. The universality of rule 110 was first conjectured by Wolfram and later proved by his assistant Matthew Cook [2004].
The Principle of Computational Equivalence (PCE)
It is Wolfram's most important result or conclusion: "There are several ways of stating the PCE, but probably the most general is to say that almost all processes that are not obviously simple can be regarded as computations of equivalent sophistication." According to Wolfram, there is no fundamental difference between a rule 30 CA and a human mind, or between a hurricane and the entire universe. They are all computations of equivalent complexity. For Wolfram, the PCE is a law of nature, comparable in importance to the law of gravity, since it is shared by all processes in the universe.
The definition of the PCE is somewhat cryptic; it can be interpreted as follows:
Computationally, almost all processes (natural and artificial) are equivalent. There is no computational difference between a computer, the flight of a bird, the rise of the stock market, the expansion of the universe, biological evolution and the human being itself (including the human mind).
Almost all processes have the same level of complexity. There are no levels of complexity. There is only one level of complexity beyond the trivial.
Once complexity is reached, it cannot be exceeded. The upper limit of complexity is relatively low. "Every system reaches a maximum level of complexity, which can be determined by the computational effort required to produce the final result."
The resources required to perform complex computations are potentially the same.
The principle of computational irreducibility
This principle has two aspects:
Given a model consisting of a set of previously known rules, it may take an irreducible amount of computational work to discover the consequence of the model. That is, the final result cannot be predicted, being necessary to go through all the intermediate states.
Some models have computational reducibility (the result can be predicted), but the most interesting models are those with computational irreducibility, since they make a richer and more creative use of resources.
You cannot reverse engineer. That is, given the data, it is not possible to deduce or find the corresponding model (the set of rules).
Critique of NKS
The emergence of complexity from simple rules is not unique to CAs. It happens with fractals, in chaos theory, in self-organizing systems, etc.
The algorithms used are very limited. They are based exclusively on CAs with particular rules. They are not sufficiently generic, since they do not contemplate other information structures. Nor do they contemplate evolutionary algorithms, i.e. dynamic rules (rules that can change during the process).
All Wolfram's deductions are based on interpretations or intuitions, not formal proofs.
He does not explicitly speak of meta-rules, although he presupposes that there is a unifying rule in the universe: a rule at the heart of everything and which generates all other rules in the universe. Wolfram is a long way from discovering the (supposedly simple) laws of the universe.
Regarding the PCE, to claim that there are no levels of complexity is to ignore the definition of computational complexity, well established and substantiated by Kolmogorov, Solomonoff and Chaitin: "The complexity of an expression is the length of the shortest algorithm capable of generating it." Therefore, there is no upper bound on complexity; it can be arbitrarily large.
Computational equivalence should be understood in the sense that the same type of resources are always used to carry out the computations. Or, more generally, perhaps Wolfram intuited, when he formulated this principle, that behind all phenomena, of whatever kind, lie the same universal primitives. In this sense, all phenomena are equivalent because they are manifestations of the same primitives. Then, the PCE as a law of nature must be replaced by the universal semantic primitives, but not as laws but as the essential principles of mind and nature.
Complexity in MENTAL
Simplicity Theory
Complexity theory should really be called "simplicity theory," for the following reasons:
Complexity is founded on simplicity, for complexity is the consequence, the manifestation of simplicity recursively applied. Here the agreement with Wolfram is total.
Simplicity and complexity are the two poles of reality and are closely related. The simple is the internal, the profound. The complex is the external, the superficial. Complexity is only apparent, since it appears or manifests itself only in the superficial.
Discovering that simplicity and complexity are connected expands our awareness and our conception of reality.
When an expression is compressed, complexity (which is an external aspect) is reduced and our understanding is increased. Chaitin says: "To understand is to compress". We should add that "to understand is to see the simple in the complex" or "to understand is to simplify". Maximum simplicity leads to power and wisdom, where everything is connected.
MENTAL, a simple language foundation of complexity
MENTAL meets the criteria of the complexity challenge:
It is a simple general or universal language. It is simple because it consists of only 12 universal semantic primitives (and their opposites or duals), where the lexical semantics is equal to the structural semantics.
It is a language of consciousness. Consciousness comes with the integration of opposites.
It integrates ontology and epistemology.
It integrates simplicity and complexity.
MENTAL is a theory (and practice) of simplicity. MENTAL unites the simple and the complex. It is the paradigm of simplicity: a few simple principles lead by combinatorics to the variety and complexity of the world. MENTAL clarifies definitively the relationship between simplicity and complexity.
It integrates the computational and the descriptive.
It solves the problem of the knowledge of knowledge.
MENTAL solves the problem of the knowledge of knowledge because the same primary archetypes are present in knowledge and meta-knowledge. Total understanding can only be achieved from the simplicity of the primary archetypes.
It brings us closer or leads us to the truth. Truth resides in the primary archetypes, the foundation of all possible worlds.
It covers all kinds of relationships between its elements.
MENTAL makes it possible to interrelate everything with everything, that is, to establish relationships of all kinds, not only causal ones.
It is a fractal language.
The same primitives apply at all levels.
"Self-similarity is the simplest way to build complexity" (Jorge Wagensberg).
It is a humanistic language. It integrates science and philosophy. It is a philosophical language because its primitives are philosophical categories.
It is the foundation of a universal science and has a transdisciplinary approach.
It covers all types of systems, including the human being, society, mind and nature. MENTAL is a model of mind and nature.
It allows complexity to be formalized.
This is described in the following section.
Unified definition of complexity
Since MENTAL is an operational and descriptive language, we can unify the two aspects of complexity theory (computational complexity and descriptive complexity) by the following definition: the complexity of an entity or mathematical expression is the length of the shortest expression that generates or describes it. Examples (the last column is the complexity):
Type | Entity | Expression | Complexity
Atom | a | a | 1
Repeated element sequences | aaaaaaaaa | ( a★9 ) | 7
 | (abc abc abc abc) | ( abc★4 ) | 9
Infinite sequences of equal elements | aaaa... | ( a★ ) | 6
 | (abc abc ...) | ( abc★ ) | 8
Predefined | Null expression | θ | 1
 | Universal disjunctive expression | α | 1
 | Universal conjunctive expression | Ω | 1
 | Empty sequence | () | 2
 | Empty set | {} | 2
Distributions | (au av aw) | (a[u v w]) | 16
 | (aub avb awb) | (a[u v w]b) | 11
Finite discrete ranges | Sequence of natural numbers between 1 and 100 | ( 1…100 ) | 9
 | Sequence between n and n+10 | ( n…(n+10) ) | 12
 | (1 3 5 7 9 11) | (1 3 … 11) | 10
Infinite discrete ranges | Natural numbers | { 1… } | 6
 | Odd natural numbers | {1 3 …} | 7
Continuous range | Real numbers between 0 and 1 | {〈 r ← r≥0 ← r≤1〉} | 16
Function | Function of two variables | 〈(f(x y) = (x+y x*y))〉 | 22
Remarks:
In general, the complexity of an infinite sequence is less than that of the corresponding finite sequences.
Blanks within an expression count toward the complexity calculation; several blanks in a row count as one (see the sketch after these remarks).
If a name has been previously assigned to something that may be complex, its description is trivially short and its measure of complexity may be minimal. This is the case for θ, α and Ω, which have complexity 1.
Complexity can vary from 1 to infinity. There is no upper limit to the complexity.
To the complexity of an expression that uses derived constructs, the length of the code defining those constructs should be added. This is the case, for example, of repetition, range and distribution.
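Under the conventions above, measuring the complexity of a given expression reduces to counting characters after collapsing runs of blanks. A minimal sketch follows (assuming expressions are handled simply as text; choosing the shortest generating or describing expression remains up to the writer, and the extra cost of derived constructs noted in the last remark is not included):

import re

def mental_complexity(expression: str) -> int:
    """Complexity of a given MENTAL expression under the stated convention:
    the length of the expression, with runs of blanks counted as one."""
    normalized = re.sub(r" +", " ", expression.strip())
    return len(normalized)

print(mental_complexity("( a★9 )"))    # 7, as in the table above
print(mental_complexity("( 1…100 )"))  # 9, as in the table above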
Addenda
Distinction between information and computational complexity
Shannon presented the modern theory of information in 1948. He defined the information I associated with an event of probability P as
I = −log₂ P bits
The bit −binary digit− is the unit of information and corresponds to the information associated with an event of probability P = 1/2. The value associated with a bit is represented as 0 or 1.
If we have a sequence x of n bits and each element (0 and 1) has the same probability (1/2), the information associated with a specific sequence x is
I(x) = −log₂ (1/2)ⁿ = n
That is, the information is the length of the sequence.
If the sequence x is random, its algorithmic complexity matches the information:
K(x) = I(x) = n
Shannon also introduced the concept of entropy (H) of information, in order to measure the degree of randomness (or disorder) of a string of equiprobable bits:
H = −∑ Fᵢ·log₂ Fᵢ   (summed over i = 1, …, m)
where Fᵢ is the relative frequency of occurrence of event i, and m is the number of events.
Examples:
Sequence | F0 | F1 | H
00100010 | 6/8 | 2/8 | 0.81
01010101 | 4/8 | 4/8 | 1
00000000 | 8/8 | 0/8 | 0
It follows that Shannon's formula is not suitable for quantifying randomness: intuitively, the first sequence is more random than the second, and yet its entropy is lower. The entropy formula, as follows from its definition, makes more sense when applied to bags (sets with repeated elements, where order is not taken into account).
The concept of algorithmic complexity is more generic than that of information because it can be applied to any mathematical entity, without considering probabilities or frequencies.
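As a numerical check on the table above, here is a minimal Python sketch that computes I for a string of n equiprobable bits and the entropy H of the three example sequences (standard formulas; the code is illustrative, not from the source):

from collections import Counter
from math import log2

def information_bits(n: int) -> float:
    """I(x) = -log2((1/2)^n) = n bits for a string of n equiprobable bits."""
    return -log2((1 / 2) ** n)

def entropy(sequence: str) -> float:
    """Shannon entropy H = -sum_i Fi * log2(Fi) over the symbol frequencies Fi."""
    counts = Counter(sequence)
    total = len(sequence)
    h = -sum((c / total) * log2(c / total) for c in counts.values())
    return h + 0.0  # normalize -0.0 to 0.0 in the single-symbol case

print(information_bits(8))          # 8.0
for s in ("00100010", "01010101", "00000000"):
    print(s, round(entropy(s), 2))  # 0.81, 1.0, 0.0, as in the table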
Second-order existential logic
First-order logic is logic that uses predicates, or relationships between elements. Its formulas are of the form P(x₁, …, xₙ), where P is the predicate and x₁, …, xₙ are the elements. Variable elements can be quantified (with the universal quantifier ∀ or the existential quantifier ∃), but predicates cannot.
Second-order logic is first-order logic extended with the possibility of representing predicates by variables and quantifying over them.
Second-order existential logic is second-order logic in which the variables of the predicates are quantified with the existential quantifier ∃.
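As a standard illustration (a textbook example, not taken from Fagin's paper), the NP property "the graph is 3-colorable" can be written in second-order existential logic by existentially quantifying three color predicates R, G, B over the vertices of a graph with edge relation E:

∃R ∃G ∃B [ ∀x (R(x) ∨ G(x) ∨ B(x)) ∧ ∀x ∀y (E(x,y) → ¬(R(x) ∧ R(y)) ∧ ¬(G(x) ∧ G(y)) ∧ ¬(B(x) ∧ B(y))) ]

The formula asserts the existence of three color classes such that every vertex receives a color and no edge joins two vertices of the same color. By Fagin's theorem, every NP property of finite structures can be expressed in this form.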
Problem complexity classes
There are several classes of problem complexity, depending on the type of algorithm used to solve it, as well as the space (memory) and time resources required:
Class | Algorithm | Space | Time
L | Deterministic | Logarithmic | -
NL | Nondeterministic | Logarithmic | -
P | Deterministic | - | Polynomial
NP | Nondeterministic | - | Polynomial
PSPACE | Deterministic | Polynomial | -
Since Fagin's theorem refers to the NP class, let us explain the differences between the P and NP classes:
Class P problems are those that can be solved by an efficient deterministic algorithm in polynomial time.
Class NP problems are those for which no efficient deterministic algorithmic solution is known, but which can be solved by a nondeterministic algorithm in polynomial time.
A nondeterministic algorithm is one that does not run according to a fixed scheme, but varies according to decisions made during its execution. An example of an NP problem is the famous traveling salesman problem (in its decision version), for which nondeterministic algorithms exist that solve it in polynomial time (see the sketch after this list).
Every P problem is in NP (P ⊆ NP), but the converse (NP ⊆ P) has not been proved. Whether P = NP remains an open question.
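The sketch below makes the distinction concrete for the traveling salesman decision problem ("is there a tour of total length at most a given bound?"): finding a tour by deterministic exhaustive search takes exponential time, whereas verifying a proposed tour (the "guess" of a nondeterministic algorithm) takes polynomial time, which is the sense in which the problem is in NP. This is an illustrative example, not code from the source.

from itertools import permutations

def tour_cost(tour, dist):
    """Total length of the closed tour over the distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tsp_brute_force(dist, bound):
    """Deterministic exhaustive search over all (n-1)! tours: exponential time."""
    n = len(dist)
    return any(tour_cost((0,) + p, dist) <= bound for p in permutations(range(1, n)))

def verify_tour(tour, dist, bound):
    """Checking one proposed tour: polynomial time, which places the
    decision version of the problem in NP."""
    return sorted(tour) == list(range(len(dist))) and tour_cost(tour, dist) <= bound

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(tsp_brute_force(dist, bound=21))            # True: some tour of cost <= 21 exists
print(verify_tour((0, 1, 3, 2), dist, bound=21))  # True: this tour has cost 18 <= 21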
Cellular automata vs. Fractals
There are several parallels and analogies between CAs and fractals:
Both are considered new scientific paradigms, with universal claims.
Both claim to be able to model and emulate different natural forms and processes.
Both claim that they can be applied to many domains: architecture, linguistics, economics, music, cognitive science, social science, physics, etc.
Both use a set of simple rules, applied recursively, to generate complexity. In fact, fractal shapes are also generated with CAs.
Both are tools for creativity.
CAs constitute a universal paradigm, having been applied, like fractals, to a diversity of areas, especially for the generation of patterns in nature: shapes of seashells, crystals and snowflakes, pigmentation in animals, plant growth, phyllotaxis (arrangement of leaves on plant stems), formation of galaxies, etc. And also for robotics, fluid flow simulation, bioengineering, cryptography, self-organizing systems, nanotechnology, materials fracturing, chemistry (molecule formation), architecture, art, music, etc.
Origin of cellular automata
CAs were conceived in the 1940s by Konrad Zuse and Stanislaw Ulam.
Konrad Zuse published a paper in 1967 and later [1969] a book (Calculating Space) in which he claimed that the universe is the result of a discrete computational process running on a giant cellular automaton of deterministic type.
Ulam's contributions came in the late 1940s, shortly after he had invented (along with Nicholas Metropolis and John von Neumann) the Monte Carlo method, a statistical (non-deterministic) method used to approximate complex mathematical expressions.
John von Neumann, also in the 1940s, applied CAs (at the suggestion of Stanislaw Ulam) in the study of complex systems, specifically in the design of self-reproducing automata (automata that reproduce themselves) and reflected in his book "Theory of Self-reproducing Automata".
The Game of Life, by Conway
The Game of Life, created by John Conway, is a two-dimensional CA with two states (black or white) for each cell, and it represents the paradigm of dynamic complexity arising from the simple: three very simple rules produce extremely complex dynamic results. The rules are as follows (a minimal implementation sketch is given after the list):
If a black cell has 2 or 3 black neighbors, it remains black.
If a white cell has 3 black neighbors, it becomes black.
In all other cases, the cell remains white (if white) or becomes white (if black).
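A minimal Python sketch of these three rules (an illustration, not Conway's original formulation), representing the black cells as a set of coordinates on an unbounded grid:

from collections import Counter

def life_step(black_cells: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """One generation of the Game of Life; black_cells holds the (x, y)
    coordinates of the black (live) cells."""
    # Count, for every cell of the grid, how many black neighbors it has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in black_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Rules 1 and 3: a black cell stays black with 2 or 3 black neighbors.
    # Rule 2: a white cell with exactly 3 black neighbors becomes black.
    return {cell for cell, count in neighbor_counts.items()
            if count == 3 or (count == 2 and cell in black_cells)}

# A "glider": five black cells whose pattern travels across the grid forever.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = life_step(cells)
print(sorted(cells))  # the same shape, shifted one cell diagonally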
Simplexity
Simplexity is a new theory that proposes a dual or complementary relationship between simplicity and complexity. Anuraj Gambhir, a visionary and innovator in the field of mobile telecommunications, is credited with coining this term, which merges the terms simplicity and complexity. But it was the book "Simplexity: Why Simple Things End Up Complex and How Complex Things Can Be Simple" by Jeffrey Kluger [2009] that has perhaps contributed most to the popularization of the term.
In his book, Kluger analyzes technological and social systems. He also studies human behavior and the way we perceive things and events. He draws parallels between apparently disparate systems according to their degree of simplicity. He proposes a new theory of how things really work, a theory that he applies to a wide variety of subjects. He even claims that understanding the relationship between simplicity and complexity can improve people's lives. But missing from the book is how to make things, mainly technological products, simpler.
Plectics
Murray Gell-Mann −Nobel laureate in physics in 1969 for the development of the quark model, the quarks being the constituent particles of nucleons (protons and neutrons)− coined the term "plectics" to refer to an interdisciplinary domain of research in which aspects of simplicity and complexity are involved. In short, plectics is the science that studies simplicity and complexity, and their interaction, in all kinds of systems.
Gell-Mann's inspiration for creating and naming this science was based on the difference between the simple laws governing the behavior of matter at the quantum level and the complexity resulting from the process of evolution. At the quantum level, at the deep level, particles have uniformity, they have no individuality, they are interchangeable. On the other hand, at the surface level there is diversity and individuality. In his work "The quark and the jaguar" [1995], the quark symbolizes the simple and the jaguar the complex, and he wonders how the simple can create the complex.
Gell-Mann chose the word "plectic" because it refers to both the simple and the complex. He recommends this term instead of "systems science" or "systems theory" because "system" is too generic a term.
In Latin, "plexus" means braided, intertwined or interwoven, and is a term from which "complex" is derived.
In Latin, "simplex" means "simple" and literally means "folded once".
It uses the suffix "-ics", as in mathematics, computer science, ethics, politics, etc.
In Greek, "plektos" means "braided together".
In Greek, "symplektikos" means "interlocking" or "uniting", and is the origin of the mathematical term "symplectic", a term first used in 1939 by Hermann Weyl.
Gell-Mann was a Distinguished Fellow at the Santa Fe Institute (New Mexico), an institute for interdisciplinary studies (which he co-founded), dedicated to the study of the complexity of natural, artificial and social systems. In particular, the institute studies complex adaptive systems, systems that are capable of learning, adapting, self-organizing, evolving and producing emergent behaviors; nonlinear dynamical systems, including chaos theory; biological evolution; ecosystem development; learning; the evolution of human languages; the rise and fall of cultures; the behavior of markets; etc.
Gell-Mann declared himself opposed to pure reductionism. According to him, reductionism is not the way to do science, although he recognized that the search for simplicity is a useful criterion or method in the search for the fundamental laws of science. In this sense he opposed the reductionist ideas of scientists such as Steven Weinberg and Richard Feynman.
Edgar Morin's Complex Thought
Edgar Morin is the creator of the concept of "complex thinking" [Morin, 1990], a type of thinking necessary to deal with the complexity of reality and which has the following characteristics:
Complexity responds to the principle of unity in diversity.
To understand complexity we must also know simplicity. "We cannot attempt to enter into the problematic of complexity if we do not enter into that of simplicity."
Complexity can only be understood through the part-whole relationship.
A dialectical logic is needed to harmonize the contradictions between order and disorder, between producer and product, between open and closed system, etc. "The deep level of reality ceases to obey classical or Aristotelian logic."
Knowledge is a spiral adventure, without end, and whose starting point is historical. We will never be able to reach the truth and total knowledge, but successive approximations can be made to approach the invisible reality. "Every explanation, every intellection will never be able to find an ultimate principle."
There is only one science of the general.
The cognitive resources common to the different disciplines, to achieve the unity of knowledge, already exist and are found in three disciplines: general systems theory, cybernetics and information theory.
The problem of the epistemology of complexity is the same as that of the knowledge of knowledge.
Metaphors must be used. Metaphors establish an analogical communication between different realities, allowing the isolation of things to be overcome.
The problem of complexity is twofold: increasing organization (the biological, of negative entropy) and increasing disorganization (the physical, of positive entropy).
There are two types of complexity: restricted complexity (the complexity of a given system) and complexity of a general, humanistic type.
A multidisciplinary approach must be used to approach the truth. Beyond the scattered and diverse, deep down, there are laws.
Science and philosophy must be integrated.
The originality of life lies in its organizational complexity. Organization by recursion, by self-production.
Morin sets out 7 principles for the foundation of complex thinking:
Systemic or organizational thinking.
It is based on the part-whole relationship, following Pascal's philosophy: "I think it is impossible to conceive of the parts without knowledge of the whole".
Hologrammatic thinking.
The part is inscribed in the whole and the whole is inscribed in the parts. There is a continuous dynamic between the whole and the parts. This type of thinking is different from holistic thinking, because holistic thinking is reductionist in that it sees only the whole.
Feedback thinking.
Cause acts upon effect and effect acts upon cause.
Recursive loop thinking in self-production and self-organization.
The key systemic concept is that of feedback loops or circular causality. There are retroactive loops (as in cybernetics), to define self-regulating processes, and recursive loops: products and effects are producers and causers of what they produce.
Autonomy thinking that contemplates both open and closed systems.
Dialogic thinking.
To understand antagonistic notions. It unites opposites, which complement each other and coexist: difference and unity, autonomy and dependence, knower and knowledge, part and whole, order and disorder, system and environment (or ecosystem), etc. "It is necessary to re-link what was considered as separate."
The thought of reintroduction of the one who knows in the known.
All knowledge is a reconstruction or translation. "Our worldviews are translations of the world."
The 10 laws of simplicity, by John Maeda
John Maeda −graphic designer, visual artist and computer technologist− proposes a humanistic approach to design in general, and technology in particular, through his 10 laws of simplicity [Maeda, 2006]:
Reduce. Reduce by hiding the attributes and functionalities of a product, so that only those needed by the consumer or user appear. Quality should be built into a product, but not directly visible.
Organize. An organized complex system makes it look simpler.
Time. Saving time makes things seem simpler.
Knowledge. Knowledge simplifies everything.
Difference. Simplicity and complexity need each other. Simplicity is only perceived as a contrast to complexity. For example, a one-button device surprises and is seen as simple because more buttons are expected.
Context. What lies on the periphery of simplicity is not peripheral at all, but very relevant.
Emotion. A product must excite.
Trust. One must trust that simplicity is the safe path for design and communication.
Failure. Sometimes failure is necessary to achieve the ultimate goal of simplicity.
The one. Simplicity consists of eliminating the obvious and adding the important and meaningful. According to Maeda, this law is a summary of the previous laws and is itself a simplification of the laws of simplicity. It is based on three keys:
Distance. From the outside, from far away, everything seems simpler. It allows us to see the forest (the simple) and not the trees (the complex).
Openness. What is open inspires confidence and makes it appear simpler.
Energy. Energy must be saved by using as few resources as possible.
Maeda describes in the book his struggle to understand the meaning of existence as a humanistic technologist. The book is written with the same principles he postulates, and is a guide to building a simpler world.
Bibliography
Badii, Remo; Politi, Antonio. Complexity. Hierarchical Structures and Scaling in Physics. Cambridge University Press, 1999.
Berthoz, Alain. Simplexity. Simplifying Principles for a Complex World. Yale University Press, 2012.
Chaitin, Gregory. Information, Randomness, and Incompleteness, 2nd ed. Singapore: World Scientific, 1990.
Chaitin, Gregory. Meta Math! The Quest for Omega. Pantheon Books, New York, 2005.
Deutsch, David. La estructura de la realidad. Anagrama, Barcelona, 1999.
Fagin, Ron. Generalized First-Order Spectra and Polynomial-Time Recognizable Sets. In Complexity of Computation (ed. R. Karp), SIAM-AMS Proc. 7 (1974), 27-41.
Fredkin, Edward. Digital Philosophy. www.digitalphilosophy.org
Gell-Mann, Murray. El quark y el jaguar. Aventuras en lo simple y lo complicado. Tusquets editores, 1995.
Gell-Mann, Murray. Let's Call It Plectics. Complexity, vol. 1, no. 5, 1995/96. Available on the Internet.
Holland, John H. Hidden Order: How Adaptation Builds Complexity. Addison-Wesley, 1996.
Immerman, Neil. Descriptive Complexity. Springer-Verlag, New York, 1999.
Kauffman, Stuart. At Home in the Universe: The Search for Laws of Complexity. Harmondsworth: Penguin, 1996.
Kolmogorov, Andrei. Three Approaches to the Quantitative Definition of Information. Problems in Information Transmission 1 (1965), 3-11.
Kluger, Jeffrey. Simplejidad. Por qué las cosas simples acaban siendo complejas y cómo las cosas complejas pueden ser simples. Ariel, 2009.
Kurzweil, Ray. Reflections on Stephen Wolfram's A New Kind of Science. Internet.
Li, Ming and Vitányi, Paul. An Introduction to Kolmogorov Complexity and Its Applications. Springer-Verlag, New York, 2nd Edition, 1997.
Maeda, John. Las leyes de la simplicidad. Gedisa, 2004.
Margolus, Norman H. Physics and Computation. Ph. D. Thesis, MIT, 1998. Internet.
Morin, Edgar. Introducción al pensamiento complejo. Gedisa, 1990.
Papadimitriou, Christos H. Computational Complexity. Addison Wesley, 1993.
Poundstone, William. The Recursive Universe: Cosmic Complexity and the Limits of Scientific Knowledge. Contemporary Books, 1985. (A popular book on cellular automata and complexity.)
Sánchez Fernández, Carlos and Valdés Castro, Concepción. Kolmogorov. El zar del azar. Nivola, 2003.
Seife, Charles. Decoding the Universe: How the New Science of Information is Explaining Everything in the Cosmos, from Our Brains to Black Holes. Viking Adult, 2006.
Schmidhuber, Jürgen. A 35-year-old Kind of Science. Origin of main ideas in Wolfram's book "A New Kind of Science". Internet.
Schmidhuber, Jürgen. A Computer Scientist´s View of Life, the Universe, and Everything. Internet, 1999.
Schmidhuber, Jürgen. Algorithmic Theories of Everything. Internet, 2000.
Toffoli, Tomaso; Margolus, Norman. Cellular Automata Machines: A New Environment for Modeling. The MIT Press, 1987.
Wolfram, Stephen. Cellular Automata and Complexity. Westview Press, 2001.
Wolfram, Stephen. A New Kind of Science. Wolfram Media, 2002. Available on the Internet.