

MENTAL, a Simple Language Foundation of Complexity

"Science is really the search for simplicity" (Claude A. Villee).

"In the deepest depth lies the simplest" (Ken Wilber).

"Everything is very difficult before it is simple" (Thomas Fuller).



Complexity vs. Simplicity

Complexity and simplicity are qualitatively opposite concepts that are connected:
Characteristics of complexity
Characteristics of simplicity
The challenge of complexity

The challenge of complexity is to establish a new, more general or universal paradigm:
Computational and Descriptive Complexity

Computational Complexity

The computational complexity of a mathematical entity is defined as "the length of the shortest program that generates it". It is also called "algorithmic complexity" or "Kolmogorov complexity" and is a measure associated with the degree of difficulty of specifying a mathematical entity by means of a program, algorithm or operational (constructive, computational) expression that generates that entity.

This definition makes no reference to:
  1. The processing time. The algorithm is supposed to run in a finite number of steps.

  2. The memory space occupied by the program (static space) and the space it needs during execution (dynamic space). The computational environment (theoretical or real) is assumed to be finite.

  3. The possibility that the algorithm is dynamic, i.e. self-modifying during execution.
To illustrate the concept of algorithmic complexity, consider the structure of sequences: a sequence with a regular structure (for example, a repeated pattern) can be generated by a program much shorter than the sequence itself, whereas a patternless (random) sequence cannot be specified by anything substantially shorter than itself.

Algorithmic complexity depends on two factors:
  1. The language used.
    There is an invariance theorem which states that any universal computational language can emulate any other; in this sense, all such languages are computationally equivalent. Therefore, any language can be used: the lengths of the shortest expressions in two different languages differ by at most a constant that depends on the languages, not on the entity described.

  2. The generation algorithm used.
    It is impossible to ensure that the algorithm used is the shortest possible; only an upper bound can be established. Since shorter (unknown) algorithms could exist, there is what is called "algorithmic uncertainty", and therefore algorithmic complexity is incomputable, i.e., undecidable.
In addition to algorithmic uncertainty (which arises even when at least one algorithm is available), there is a further uncertainty when we are unable to find any algorithm at all (e.g., for a given sequence) and thus do not know whether or not that sequence is random. Chaitin's theorem proves that there exist undecidable sequences, in the sense that it is impossible to prove whether or not they are random. This theorem is analogous to Gödel's theorem for formal axiomatic systems, which proves that there exist undecidable sentences.
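
Although the exact algorithmic complexity of an entity is incomputable, upper bounds on it are easy to obtain. The following Python sketch (an illustration added here, not part of the original text) uses a general-purpose compressor as a crude upper-bound estimator: the compressed form of a string, together with a fixed decompressor, is one particular program that regenerates it, so its length bounds the algorithmic complexity from above; a shorter, unknown program may always exist (algorithmic uncertainty).

  import random
  import zlib

  def complexity_upper_bound(data: bytes) -> int:
      # The compressed blob plus a fixed decompressor is one concrete program
      # that regenerates `data`, so its length is an upper bound on K(data).
      return len(zlib.compress(data, 9))

  regular = b"a" * 1000                                            # highly regular sequence
  random_like = bytes(random.getrandbits(8) for _ in range(1000))  # patternless sequence

  print(len(regular), complexity_upper_bound(regular))          # 1000 -> bound of a few bytes
  print(len(random_like), complexity_upper_bound(random_like))  # 1000 -> bound close to 1000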

The concept of algorithmic complexity was first raised by Ray Solomonoff, defined by Andrei Kolmogorov and later extended by Gregory Chaitin, all during the 1960s.


Descriptive complexity

It is the measure associated with the difficulty of describing a mathematical entity. Descriptive complexity depends on three factors:
  1. The descriptive language used.
    Traditionally, the formal language of mathematical logic has been used.

  2. The descriptive expression used.
    There may be several valid alternative forms. Again, there is uncertainty, but of a descriptive type.

  3. The resolution, i.e., the degree of detail of the description.
The topic of descriptive complexity starts in 1974, when Ron Fagin [1974] showed that the class NP coincides with the class of problems describable in second-order existential logic [see Addendum]. This means that the computational complexity of an NP problem can be understood as a function of the complexity of its logical description.

According to Fagin, there is the following correspondence between measures of descriptive complexity and the resources associated with algorithmic complexity:
  1. The depth level of quantifiers corresponds to the computation time in a parallel computing environment.

  2. The number of variables in the descriptive expression corresponds to the amount of hardware required (memory and processors).

Complexity according to Wolfram

A "new kind of science"

Stephen Wolfram published "A New Kind of Science" (NKS) in 2002, a voluminous book (1197 pages) and the result of many years of research in the field of computation, in which he claims to have invented/discovered a universal scientific paradigm for modeling systems and for understanding the universe itself. Wolfram is the creator of "Mathematica", a software system for mathematical computation and graphics, with which he carried out all of this research. His research focused on cellular automata (CAs), because he discovered that a set of simple, local rules, applied recursively, could generate forms or patterns of great complexity. This is explained by the fact that, although the rules are local, their effects (when the rules are applied recursively) spread throughout the system, affecting it globally. The publication of NKS provoked some controversy. For some it was no breakthrough, but a (somewhat exhaustive) presentation of things already known. For others, these ideas even threaten the very notion of science.
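
As a hedged illustration of the kind of experiment behind NKS (a minimal sketch, not Wolfram's own code), the following Python program runs the elementary cellular automaton rule 30: each new cell depends only on itself and its two neighbours, yet the global pattern it produces looks highly complex.

  RULE = 30  # the rule number encodes the new state for the 8 possible neighbourhoods

  def step(cells):
      # Apply the local rule to every cell (periodic boundary conditions).
      n = len(cells)
      return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
              for i in range(n)]

  cells = [0] * 31
  cells[15] = 1  # start with a single black cell in the middle
  for _ in range(16):
      print("".join("#" if c else "." for c in cells))
      cells = step(cells)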

Wolfram's conclusions, after many years of research, can be summarized as follows:
The Principle of Computational Equivalence (PCE)

It is Wolfram's most important result or conclusion: "There are several ways of stating the PCE, but probably the most general is to say that almost all processes that are not obviously simple can be regarded as computations of equivalent sophistication." According to Wolfram, there is no fundamental difference between the rule-30 cellular automaton and a human mind, or between a hurricane and the entire universe: they are all computations of equivalent complexity. For Wolfram, the PCE is a law of nature, comparable in importance to the law of gravity, since it is shared by all processes in the universe.

The definition of the PCE is somewhat cryptic; it can be interpreted as follows:
The principle of computational irreducibility

This principle has two aspects:
  1. Given a model consisting of a set of previously known rules, it may take an irreducible amount of computational work to determine the consequences of the model. That is, the final result cannot be predicted; it is necessary to go through all the intermediate states.

    Some models are computationally reducible (their result can be predicted), but the most interesting models are the computationally irreducible ones, since they make a richer and more creative use of resources.

  2. Reverse engineering is not possible. That is, given the data, it is not possible to deduce or find the corresponding model (the set of rules).

Critique of NKS
Complexity in MENTAL

Simplicity Theory

Complexity theory should really be called "simplicity theory," for the following reasons:
MENTAL, a simple language foundation of complexity

MENTAL meets the criteria of the complexity challenge:
Unified definition of complexity

Since MENTAL is both an operational and a descriptive language, we can unify the two aspects of complexity theory (computational complexity and descriptive complexity) with the following definition: the complexity of a mathematical entity or expression is the length of the shortest expression that generates or describes it. Examples (the last column is the complexity):

Type | Entity | Expression | Complexity
Atom | a | a | 1
Repeated element sequences | aaaaaaaaa | ( a★9 ) | 7
Repeated element sequences | (abc abc abc) | ( abc★3 ) | 9
Infinite sequences of equal elements | aaaa... | ( a★ ) | 6
Infinite sequences of equal elements | (abc abc ...) | ( abc★ ) | 8
Predefined | Null expression | θ | 1
Predefined | Universal disjunctive expression | α | 1
Predefined | Universal conjunctive expression | Ω | 1
Predefined | Empty sequence | () | 2
Predefined | Empty set | {} | 2
Distributions | (au av aw) | (a[u v w]) | 10
Distributions | (aub avb awb) | (a[u v w]b) | 11
Finite discrete ranges | Sequence of natural numbers between 1 and 100 | ( 1…100 ) | 9
Finite discrete ranges | Sequence between n and n+10 | ( n…(n+10) ) | 12
Finite discrete ranges | (1 3 5 7 9 11) | (1 3 … 11) | 10
Infinite discrete ranges | Natural numbers | { 1… } | 6
Infinite discrete ranges | Odd natural numbers | {1 3 …} | 7
Continuous range | Real numbers between 0 and 1 | {⟨ rr≤0 ← r≤1⟩} | 16
Function | Function of two variables | ⟨(f(x y) = (x+y x*y))⟩ | 22
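
A minimal Python sketch of this unified definition (the helper and the expression strings are illustrative, not part of MENTAL itself): among all the known expressions that generate or describe an entity, the measured complexity is the length of the shortest one, which, as with algorithmic complexity, is only an upper bound, since a shorter expression may exist.

  def complexity(candidate_expressions):
      # Complexity = length of the shortest known generating/describing expression.
      shortest = min(candidate_expressions, key=len)
      return shortest, len(shortest)

  # Two ways of denoting a sequence of nine equal elements (see the table above):
  print(complexity(["(a a a a a a a a a)", "( a★9 )"]))   # ('( a★9 )', 7)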

Remarks:

Addenda

Distinction between information and computational complexity

Shannon presented the modern theory of information in 1948. He defined the information I associated with an event of probability P as

  I = log₂(1/P) = −log₂(P)

The bit (binary digit) is the unit of information and corresponds to the information associated with an event of probability P = 1/2. The value associated with a bit is represented as 0 or 1.

If we have a sequence x of n bits and each element (0 and 1) has the same probability (1/2), the information associated with a specific string x is

  I(x) = log₂(1/(1/2)ⁿ) = n

That is, the information is the length of the sequence.

If the sequence x is random, its algorithmic complexity matches its information: K(x) ≈ I(x) = n. Shannon also introduced the concept of entropy (H) of information, in order to measure the degree of randomness (or disorder) of a string of equiprobable bits:

  H = −Σ Fi·log₂(Fi)     (summed over i = 1, …, m)

where Fi is the relative frequency of occurrence of the event i, and m is the number of events.

Examples:

Sequence | F0 | F1 | H
00100010 | 6/8 | 2/8 | 0.81
01010101 | 4/8 | 4/8 | 1
00000000 | 8/8 | 0/8 | 0

It follows that Shannon's formula is not suitable for quantifying randomness: intuitively, the first sequence is more random than the second, and yet its entropy is lower. The entropy formula, as follows from its definition, makes more sense when applied to bags (multisets, i.e., collections with repeated elements in which order is not taken into account).
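
A minimal Python sketch (not from the original text) that reproduces the entropy values in the table using H = −Σ Fi·log₂(Fi):

  from math import log2

  def entropy(bits: str) -> float:
      # Shannon entropy from the relative frequencies of the symbols present.
      freqs = [bits.count(symbol) / len(bits) for symbol in set(bits)]
      return sum(-f * log2(f) for f in freqs if f > 0)

  for s in ["00100010", "01010101", "00000000"]:
      print(s, round(entropy(s), 2))
  # 00100010 0.81
  # 01010101 1.0
  # 00000000 0.0   (entropy depends only on symbol frequencies, not on their order)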

The concept of algorithmic complexity is more generic than that of information because it can be applied to any mathematical entity, without considering probabilities or frequencies.


Second-order existential logic

First-order logic is logic that uses predicates, i.e., relations between elements. Formulas are of the form P(x1, ..., xn), where P is the predicate and x1, ..., xn are the elements. Element variables can be quantified (with the universal quantifier ∀ or the existential quantifier ∃), but predicates cannot be quantified.

Second-order logic is first-order logic extended with the possibility of quantifying over predicates and representing them by predicate variables.

Second-order existential logic is second-order logic in which the predicate variables are quantified only with the existential quantifier ∃.
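
As a standard illustration (added here, not taken from Fagin's paper), graph 3-colorability, a classic NP problem, can be expressed in second-order existential logic by existentially quantifying three unary predicates R, G, B (the three colours) over a graph given by its edge relation E:

  ∃R ∃G ∃B [ ∀x (R(x) ∨ G(x) ∨ B(x))
             ∧ ∀x ∀y ( E(x,y) → ¬(R(x) ∧ R(y)) ∧ ¬(G(x) ∧ G(y)) ∧ ¬(B(x) ∧ B(y)) ) ]

The first-order part (everything after the three ∃ quantifiers) merely checks a candidate colouring; the existential second-order quantifiers play the role of the nondeterministic "guess".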


Problem complexity classes

There are several classes of problem complexity, depending on the type of algorithm used to solve it, as well as the space (memory) and time resources required:

Class | Algorithm | Space | Time
L | Deterministic | Logarithmic | -
NL | Nondeterministic | Logarithmic | -
P | Deterministic | - | Polynomial
NP | Nondeterministic | - | Polynomial
PSPACE | Deterministic | Polynomial | -

Since Fagin's theorem refers to the class NP, let us explain the difference between the classes P and NP. A nondeterministic algorithm is one that does not run according to a fixed scheme, but varies according to decisions (guesses) made during its execution. P is the class of problems solvable by a deterministic algorithm in polynomial time; NP is the class of problems solvable by a nondeterministic algorithm in polynomial time. An example of an NP problem is the famous traveling salesman problem (in its decision version): a nondeterministic algorithm can guess a tour and verify in polynomial time whether its length is within a given bound.
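
A hedged Python sketch (an illustration, not from the original text) of the "guess and verify" reading of NP: whatever the nondeterministic guessing stage proposes, a candidate tour for the decision version of the traveling salesman problem ("is there a tour of total length at most bound?") can be checked in polynomial time.

  def verify_tour(distances, tour, bound):
      # Polynomial-time check of a proposed certificate (a tour) for the
      # decision version of the traveling salesman problem.
      n = len(distances)
      if sorted(tour) != list(range(n)):   # every city visited exactly once
          return False
      total = sum(distances[tour[i]][tour[(i + 1) % n]] for i in range(n))
      return total <= bound

  distances = [[0, 2, 9, 10],
               [2, 0, 6, 4],
               [9, 6, 0, 3],
               [10, 4, 3, 0]]
  print(verify_tour(distances, [0, 1, 3, 2], 21))   # True: 2 + 4 + 3 + 9 = 18 <= 21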

Every P problem is in NP (P ⊆ NP), but the converse (NP ⊆ P) has not been proved. Whether P = NP remains an open question.


Cellular automata vs. Fractals

There are several parallels and analogies between CAs and fractals. CAs constitute a universal paradigm, having been applied, like fractals, to a great diversity of areas, especially the generation of patterns in nature: shapes of seashells, crystals and snowflakes, pigmentation in animals, plant growth, phyllotaxis (the arrangement of leaves on plant stems), the formation of galaxies, etc. They have also been applied to robotics, fluid-flow simulation, bioengineering, cryptography, self-organizing systems, nanotechnology, the fracturing of materials, chemistry (molecule formation), architecture, art, music, etc.


Origin of cellular automata

CAs were conceived in the 1940s by Konrad Zuse and Stanislaw Ulam.

Konrad Zuse published a paper in 1967 and later [1969] a book (Calculating Space) in which he claimed that the universe is the result of a discrete computational process running on a giant cellular automaton of deterministic type.

Ulam's contributions came in the late 1940s, shortly after he had invented (along with Nicholas Metropolis and John von Neumann) the Monte Carlo method, a statistical (non-deterministic) method used to approximate complex mathematical expressions.

John von Neumann, also in the 1940s, applied CAs (at the suggestion of Stanislaw Ulam) to the study of complex systems, specifically to the design of self-reproducing automata (automata that reproduce themselves), work reflected in his book "Theory of Self-Reproducing Automata".


The Game of Life, by Conway

The Game of Life, created by John Conway, is a two-dimensional CA with two states (black or white) per cell. It represents the paradigm of dynamic complexity arising from the simple: three very simple rules produce extremely complex dynamic results (see the sketch after the rules). The rules are:
  1. If a black cell has 2 or 3 black neighbors, it remains black.
  2. If a white cell has 3 black neighbors, it becomes black.
  3. In all other cases, the cell remains white (if white) or becomes white (if black).
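
A minimal Python sketch (an illustration, not Conway's own formulation) of one synchronous update step, with 1 playing the role of a black cell and 0 of a white one; cells outside the finite grid are treated as white.

  def life_step(grid):
      rows, cols = len(grid), len(grid[0])

      def black_neighbours(r, c):
          return sum(grid[r + dr][c + dc]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)
                     and 0 <= r + dr < rows and 0 <= c + dc < cols)

      return [[1 if (grid[r][c] and black_neighbours(r, c) in (2, 3))    # rule 1
                    or (not grid[r][c] and black_neighbours(r, c) == 3)  # rule 2
                 else 0                                                  # rule 3
               for c in range(cols)]
              for r in range(rows)]

  # A "blinker": three black cells in a row oscillate between horizontal and vertical.
  grid = [[0, 0, 0],
          [1, 1, 1],
          [0, 0, 0]]
  for row in life_step(grid):
      print(row)   # [0, 1, 0] / [0, 1, 0] / [0, 1, 0]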

Simplexity

Simplexity is a new theory that proposes a dual or complementary relationship between simplicity and complexity. Anuraj Gambhir, a visionary and innovator in the field of mobile telecommunications, is credited with the creation of this term, which merges the words simplicity and complexity. But it was the book "Simplexity: Why Simple Things Become Complex (and How Complex Things Can Be Made Simple)" by Jeffrey Kluger [2009] that has perhaps contributed most to the popularization of the term.

In his book, Kluger analyzes technological and social systems. He also studies human behavior and the way we perceive things and events. He draws parallels between apparently disparate systems according to their degree of simplicity. He proposes a new theory of how things really work, a theory that he applies to a wide variety of subjects. He even claims that understanding the relationship between simplicity and complexity can improve people's lives. But missing from the book is how to make things, mainly technological products, simpler.


Plectics

Murray Gell-Mann, Nobel laureate in physics in 1969 for the development of the model of quarks (the component particles of nucleons, i.e., of protons and neutrons), coined the term "plectics" to refer to an interdisciplinary domain of research involving aspects of both simplicity and complexity. More briefly, plectics is the science that studies simplicity and complexity, and their interaction, in all kinds of systems.

Gell-Mann's inspiration for creating and naming this science was based on the difference between the simple laws governing the behavior of matter at the quantum level and the complexity resulting from the process of evolution. At the quantum level, the deep level, particles have uniformity; they have no individuality and are interchangeable. At the surface level, on the other hand, there is diversity and individuality. In his work "The Quark and the Jaguar" [1995], the quark symbolizes the simple and the jaguar the complex, and he wonders how the simple can create the complex.

Gell-Mann chose the word "plectics" because it refers to both the simple and the complex. He recommended this term instead of "systems science" or "systems theory" because "system" is too generic a term.

Gell-Mann was a Distinguished Fellow at the Santa Fe Institute (New Mexico), an institute for interdisciplinary studies (which he co-founded) dedicated to the study of the complexity of natural, artificial and social systems. In particular, the institute studies complex adaptive systems (systems capable of learning, adapting, self-organizing, evolving and producing emergent behaviors); nonlinear dynamical systems, including chaos theory; biological evolution; ecosystem development; learning; the evolution of human languages; the rise and fall of cultures; the behavior of markets; etc.

Gell-Mann declared himself opposed to pure reductionism. According to him, reductionism is not the way to do science, although he recognized that the search for simplicity is a useful criterion or method in the search for the fundamental laws of science. In this sense he opposed the reductionist ideas of scientists such as Steven Weinberg and Richard Feynman.


Edgar Morin's Complex Thought

Edgar Morin is the creator of the concept of "complex thought" [Morin, 1990], a type of thinking necessary to deal with the complexity of reality. Morin sets out seven principles as the foundation of complex thinking:
  1. Systemic or organizational thinking.
    It is based on the part-whole relationship, following Pascal's philosophy: "I think it is impossible to conceive of the parts without knowledge of the whole".

  2. Hologrammatic thinking.
    The parts are inscribed in the whole and the whole is inscribed in the parts. There is a continuous dynamic between the whole and the parts. This type of thinking is different from holistic thinking, because holistic thinking is reductionist in that it only sees the whole.

  3. Feedback thinking.
    Cause acts upon effect and effect acts upon cause.

  4. Recursive loop thinking in self-production and self-organization.
    The key systemic concept is that of feedback loops or circular causality. There are retroactive loops (as in cybernetics), which define self-regulating processes, and recursive loops, in which products and effects are themselves producers and causes of what produces them.

  5. Autonomy thinking that contemplates both open and closed systems.

  6. Dialogic thinking.
    To understand antagonistic notions. It unites opposites, which complement each other and coexist: difference and unity, autonomy and dependence, knower and known, part and whole, order and disorder, system and environment (or ecosystem), etc. "It is necessary to relink what was considered separate."

  7. The thought of the reintroduction of the knower into the known.
    All knowledge is a reconstruction or translation. "Our worldviews are translations of the world."

The 10 laws of simplicity, by John Maeda

John Maeda, graphic designer, visual artist and computer technologist, proposes a humanistic approach to design in general, and to technology in particular, through his 10 laws of simplicity [Maeda, 2006]:
  1. Reduce. Reduce by hiding the attributes and functionalities of a product, so that only those needed by the consumer or user appear. Quality should be built into a product, but not directly visible.

  2. Organize. Organizing a complex system makes it look simpler.

  3. Time. Saving time makes things seem simpler.

  4. Knowledge. Knowledge simplifies everything.

  5. Difference. Simplicity and complexity need each other. Simplicity is only perceived as a contrast to complexity. For example, a one-button device surprises and is seen as simple because more buttons are expected.

  6. Context. What lies on the periphery of simplicity is not peripheral at all, but very relevant.

  7. Emotion. A product must excite.

  8. Trust. You have to trust that simplicity is the safe approach to design and communication.

  9. Failure. Sometimes failure is necessary to achieve the ultimate goal of simplicity.

  10. The one. Simplicity consists of subtracting the obvious and adding the meaningful. According to Maeda, this law summarizes the previous laws and is itself a simplification of the laws of simplicity. It is based on 3 keys:

    1. Distance. From the outside, from far away, everything seems simpler. It allows us to see the forest (the simple) and not the trees (the complex).

    2. Openness. What is open inspires confidence and makes it appear simpler.

    3. Energy. Energy must be saved by using as few resources as possible.
In the book, Maeda describes his struggle, as a humanistic technologist, to understand the meaning of existence. The book is written according to the same principles he postulates, and it is a guide to building a simpler world.


Bibliography