"The computer was invented to help clarify the philosophical question of the foundations of mathematics" (Gregory Chaitin).
"The most essential part of mathematics lacks a basis" (Niels Henrik Abel).
"Nature cannot be complicated. One must look for the source of truth in mathematical simplicity" (Einstein).
The conceptual foundation of mathematics has a rich and long history that stretches from the ancient Greeks to the present day. Which essential concepts mathematics should be founded on, and how they are combined, remains an open question today. Historically, the foundation of mathematics has rested successively on five concepts: numbers, sets, structures, categories and functions.
The Numbers
In the 6th century B.C., Pythagoras and his school (the Pythagoreans) based all mathematics on the concept of natural number and arithmetic operations with natural numbers. Rational numbers were a concept derived from the natural numbers. According to the Pythagoreans, numbers are the basic components with which the edifice of reality is constructed.
In the Pythagorean school, what is known today as the "Pythagorean theorem" was proved. This generated an interest in square numbers and, in general, in polygonal numbers, which could be represented by polygons. Thus, for example, there were triangular, square, pentagonal, etc. numbers.
The impact produced by the discovery of irrational numbers −in particular, the impossibility of expressing the diagonal of a square of side 1 by a rational number − provoked an even greater interest in geometric objects and their combinatorics through operations. For example, a number was represented as a segment, addition was performed by concatenating two segments, the product of two numbers equaled the area of a rectangle, and so on. There seemed to be more truth in geometry than in arithmetic. But arithmetic and geometry were two sides of the same coin. Arithmetic represented the discrete, the superficial and rational, and geometry the continuous, the deep and the intuitive (or irrational, from the numerical point of view).
The paradigm of the union or connection of arithmetic with geometry is the Pythagorean theorem, the best-known theorem in mathematics. Although Pythagoras discovered it for two dimensions, today we know that it is a general theorem applicable to spaces of n dimensions for objects between 1 and n−1 dimensions. [see Appendix - The Generalized Pythagorean Theorem].
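The n-dimensional statement can be checked numerically: the squared diagonal of an n-dimensional box is the sum of the squares of its sides. A minimal illustrative sketch in Python (the function name is mine):

```python
import math

def diagonal(sides):
    """Length of the main diagonal of an n-dimensional box,
    obtained by iterating the Pythagorean theorem."""
    return math.sqrt(sum(s * s for s in sides))

print(diagonal([3, 4]))      # the classic 3-4-5 right triangle: 5.0
print(diagonal([1, 1, 1]))   # space diagonal of the unit cube: √3 ≈ 1.732
```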
In the 3rd century BC, Euclid published his famous "Elements" of geometry and introduced the axiomatic method. In addition to geometric theorems, the work also includes results that can be framed in what today we call "number theory", grounding them in geometry because they could not yet be established purely arithmetically.
In the Middle Ages, negative numbers were introduced to solve, for example, the equation x+5 = 3. At first they were considered imaginary numbers. Their adoption was extraordinarily slow, owing to a certain refusal to consider them numbers at all, since only the natural numbers were regarded as genuine numbers. Negative numbers were not universally accepted until the end of the 18th century.
In the 16th century, Gerolamo Cardano discovered imaginary numbers while dealing with the impossibility of dividing the number 10 into two parts whose product is 40, arriving at the "impossible" expressions 5+√(−15) and 5−√(−15). The imaginary unit, the square root of −1, was introduced by Raphael Bombelli (also in the 16th century) when solving cubic equations. In the following centuries (17th to 19th), the imaginary unit was interpreted as another dimension of the plane (John Wallis, Caspar Wessel, Jean-Robert Argand and Gauss).
In the 17th century, Descartes reduced geometry to the analysis of real numbers, associating coordinates to geometric entities and expressing the relationships between coordinates by equations (e.g. a straight line is an equation of the first degree). Thus analytic geometry was born.
In the same 17th century, Newton and Leibniz invented the infinitesimal calculus, thus giving birth to mathematical analysis. It is based on a paradoxical concept: an infinitesimal number is a number greater than zero but smaller than any positive number, no matter how small. This number, evidently, does not exist at the real level, so it is necessary to appeal to the imaginary.
In the 19th century, Weierstrass arithmetized mathematical analysis by reducing irrational numbers to arithmetic on the natural numbers. An irrational number, impossible to write down exactly, is defined by an infinite set of rational numbers providing successive approximations. However, this system applies only to a vanishingly small proportion of the irrational numbers: those that can be described in a finite way. The rest are inexpressible, that is, inaccessible and unrepresentable.
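The idea of defining an irrational by successive rational approximations can be sketched in Python (a modern illustration, not part of the historical material; the iteration used is the Babylonian method):

```python
from fractions import Fraction

def sqrt2_approximations(steps):
    """Successive rational approximations to the irrational √2,
    via the Babylonian iteration x -> (x + 2/x) / 2."""
    x = Fraction(1)
    seq = [x]
    for _ in range(steps):
        x = (x + 2 / x) / 2
        seq.append(x)
    return seq

for r in sqrt2_approximations(3):
    print(r, "≈", float(r))
# 1, 3/2, 17/12, 577/408, ...: no single term equals √2,
# but the infinite sequence defines it.
```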
At the end of the 19th century, Cantor discovered the "transfinite numbers": infinite numbers of ever higher order.
In 1888, Dedekind published "What are numbers and what should they be?" (Was sind und was sollen die Zahlen?), where he axiomatized the arithmetic of the natural numbers.
In 1889, Giuseppe Peano formalized the natural numbers by means of 3 primitive notions and 5 axioms.
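Peano's construction rests on zero, successor and recursion, and can be sketched computationally (an illustrative Python sketch; the class and function names are mine):

```python
class Nat:
    """Peano naturals: a number is either Zero or the Successor of a number."""
    pass

class Zero(Nat):
    def __eq__(self, other):
        return isinstance(other, Zero)

class Succ(Nat):
    def __init__(self, pred):
        self.pred = pred
    def __eq__(self, other):
        return isinstance(other, Succ) and self.pred == other.pred

def add(a, b):
    """Addition by primitive recursion: a + 0 = a, a + S(b) = S(a + b)."""
    if isinstance(b, Zero):
        return a
    return Succ(add(a, b.pred))

def to_int(n):
    """Decode a Peano numeral into an ordinary integer."""
    return 0 if isinstance(n, Zero) else 1 + to_int(n.pred)

two = Succ(Succ(Zero()))
three = Succ(two)
print(to_int(add(two, three)))  # 5
```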
In the 20th century, John Conway invented (or discovered) the surreal numbers, a class of numbers that includes the real numbers together with infinite, infinitesimal and transfinite numbers.
The Sets
Since ancient Greece, mathematicians had studied individual objects (a number, a triangle, a square, etc.). In the first half of the 19th century, Bernard Bolzano began to study groupings of objects sharing a common property, such as the numbers greater than 7, all triangles, etc. Bolzano is considered the precursor of set theory (and also of modern logic).
In 1874, Georg Cantor developed set theory, at the conceptual level, as the foundation of the whole mathematical edifice, especially for solving mathematical problems combining geometry and arithmetic. One of these problems was to determine the number of points contained in a straight line. It was precisely the notion of infinity that led him to the idea of set. He called his theory the "theory of aggregates". In this theory:
Sets could be defined and operations could be performed on them.
He used the so-called "principle of comprehension": for every property P there exists a corresponding set C(P) of elements having that property.
Earlier, George Boole had taken the first steps in an operational sense in his 1854 work "An Investigation of the Laws of Thought", with his algebra of logic, in which the logical operators of "union" (or disjunction), "intersection" (or conjunction) and "complement" (or negation) were applied to sets, including the empty set (symbolized by 0) and the universal set (symbolized by 1).
Between 1893 and 1903, Gottlob Frege developed a set theory equivalent to Cantor's, but at the axiomatic level. Frege intended to reduce mathematics to logic, and to deduce or explain numbers and arithmetic from set theory. This theory, known today as "naive set theory", is based on only two simple principles:
The extension principle.
A set is completely determined by means of its constituent elements. Two sets with the same elements are considered equal.
The principle of intension (or comprehension).
A set can be specified by a property. The elements of the set are the objects that possess that property.
In 1903, Bertrand Russell demonstrated that the principle of comprehension used by Frege is inconsistent (inconsistent meaning that contradictions can be derived, and from a contradiction anything can be derived), hence the qualification of the theory as "naive". His argument was a paradox that bears his name (Russell's paradox), which uses a self-referential mechanism: is the set R, whose elements are all the sets that are not elements of themselves, an element of itself or not? The paradox is formally expressed thus:
If R∈R, then R∉R.
If R∉R, then R∈R.
With Russell's paradox, Frege's project of grounding arithmetic on set theory collapsed.
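The self-referential mechanism can be simulated operationally (an illustrative Python sketch): model a "set" under unrestricted comprehension as its membership predicate, and the question of whether R belongs to R never terminates.

```python
# A "set" under unrestricted comprehension is just its membership
# predicate: for any property P there is the set {x : P(x)}.
def comprehension(P):
    return P  # the set IS its defining property

# Russell's set R = {x : x not in x}: sets that are not members of themselves.
R = comprehension(lambda x: not x(x))

# Asking whether R belongs to R forces R ∈ R ⟺ R ∉ R; operationally,
# the question recurses forever.
try:
    R(R)
    answer = "terminated"
except RecursionError:
    answer = "undecidable: the definition refers to itself forever"
print(answer)
```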
To eliminate this paradox several solutions were proposed:
Russell's own so-called "theory of types" (1908).
Russell applied this theory in the work Principia Mathematica, which he elaborated together with Alfred North Whitehead. In it, levels or types of sets are distinguished. A set of one type can only contain sets of a lower type. Therefore, a set cannot be a member of itself.
Distinguish between sets and classes (von Neumann):
A set is a class that is an element of some other class.
A proper class cannot be contained in a set or in another class.
Sets are the particular classes that can be members of a class.
All sets are classes, but not all classes are sets.
Classes that are not sets are called proper classes.
The class of all sets that do not belong to themselves is a proper class.
Even so, the distinction between class and set is not clear. Moreover, despite all these conceptual precautions, it seems that the class of all classes that do not belong to themselves also leads to contradiction, as in the case of sets.
The iterative conception of sets.
According to this system, a set is obtained by the repeated application of certain formation (or constructive) principles, which give rise to an infinite hierarchy in which there is no ultimate or final level. The set-theoretic universe is an open universe, in the sense that it can never be completed, just as with the natural numbers.
In 1908, Ernst Zermelo presented the first list of consistent axioms for set theory. The axioms set constraints on defining sets and operating with them, so that contradictions do not occur; only sets that satisfy the axioms are valid. In order to construct sets, the theory requires the prior existence of at least one initial set, together with a series of operations (union, Cartesian product and power) that are analogous to the arithmetic operations (addition, product and power of numbers). Among Zermelo's axioms are:
The axiom of infinity. It establishes the existence of infinite sets (which are necessary for the reduction of analysis to arithmetic).
The axiom of choice. It states that in any collection of disjoint sets (with no elements in common), another set can be constructed consisting of only one element from each of the sets in the collection. This axiom is obvious for finite sets, but it is intended for infinite sets.
The separation axiom. It is a relativized version of the principle of comprehension: given a property P and a set C, there exists the subset of elements of C that have the property P. This subset can be the empty set (symbolized by {} or ∅).
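The restricted character of separation has a direct computational reading; an illustrative Python sketch (for finite collections, a choice function is likewise unproblematic):

```python
# Zermelo's separation axiom: from an existing set C and a property P
# we may form the subset {x ∈ C : P(x)} -- never {x : P(x)} out of thin air.
def separation(C, P):
    return {x for x in C if P(x)}

C = set(range(10))
evens = separation(C, lambda x: x % 2 == 0)
print(evens)                             # {0, 2, 4, 6, 8}
print(separation(C, lambda x: x > 100))  # the empty set: set()

# A choice function for a finite collection of disjoint non-empty sets:
# pick one element from each (no axiom is needed in the finite case).
def choose(collection):
    return {min(s) for s in collection}

print(choose([{1, 2}, {3, 4}, {5}]))     # {1, 3, 5}
```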
Zermelo's list of axioms was updated in 1921 by Abraham Fraenkel, who added the axiom of replacement (which states that the values of a function defined on a set also form a set). Since then, the axiomatic system of set theory has been called the Zermelo-Fraenkel system (ZF for short).
Fraenkel also proved the independence of the axiom of choice from the rest of the axioms, so that it plays in set theory a role analogous to Euclid's fifth postulate in geometry (the famous postulate of parallels). The ZF theory with the axiom of choice is called ZFC ('C' for 'Choice').
In 1923, Thoralf Skolem added another axiom, the axiom of foundation (or regularity): a set cannot belong to itself.
In the 1960s Alexander Grothendieck added another axiom to guarantee the existence of the set of successive powers of an infinite set. Later new axioms were added to guarantee even larger sets, known as "large cardinals".
Critique of set theory
Set theory had a great impact and changed the landscape of mathematics. It is considered the most important mathematical concept, the foundation of mathematics, but this concept suffers from limitations:
The concept of set was historically defined without considering two fundamental properties:
1. The repetition of elements.
2. The order of the elements.
Although in nature no two elements are really identical, they are treated as equal from the point of view of their properties or attributes. For example, we speak of 2 objects, 3 books, 4 white balls, etc.
In mathematics, these are called:
Bags (or multisets): sets with property 1 (repeated elements allowed).
Sequences: sets with properties 1 and 2, i.e., bags with order (ordered bags).
The following table shows the possible combinations when repetitions and order are or are not considered:

Repetitions | Order | Name
No          | No    | Set
No          | Yes   | Ordered set
Yes         | No    | Bag
Yes         | Yes   | Sequence (ordered bag)
When there are no repeated elements, then a set is a particular case of bag, and an ordered set is a particular case of sequence.
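The distinction maps directly onto standard data structures; an illustrative Python sketch:

```python
from collections import Counter

items = ["a", "b", "a", "c", "a", "b"]

as_set = set(items)        # no repetitions, no order
as_bag = Counter(items)    # repetitions, no order (a multiset or bag)
as_sequence = list(items)  # repetitions and order

print(as_set)       # {'a', 'b', 'c'} (in some order)
print(as_bag)       # Counter({'a': 3, 'b': 2, 'c': 1})
print(as_sequence)  # ['a', 'b', 'a', 'c', 'a', 'b']
```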
Lack of inverse operations.
For union and intersection of sets.
There is the subtraction of sets A−B (the elements of A that do not belong to B), but there is no sum. It is assumed that the equivalent of addition is union, but this is not so, since the two operations are essentially distinct (addition is an arithmetic operation; union is an operation between sets).
Lack of combinatorial operations.
There are no combinatorial operations for defining, for example, new sets and sequences from given sets and sequences. The only such mechanism seems to be the "Cartesian product" of several sets C1×...×Cn, which is another set: that of the sequences formed by corresponding elements of those sets. This combinatorial poverty is evidenced by the fact that variations, permutations and combinations can only be expressed by means of mnemonic notations (Tm,n, Pn, Cm,n) and by resorting to natural language.
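The combinatorial constructions that set notation can only name are, by contrast, directly constructible computationally; an illustrative Python sketch:

```python
from itertools import product, permutations, combinations

A = {1, 2}
B = {"x", "y"}

# Cartesian product A × B: the set of all pairs
print(set(product(A, B)))

# Variations, permutations and combinations of {1, 2, 3}, which set
# theory can only name mnemonically, built explicitly:
S = [1, 2, 3]
print(list(permutations(S, 2)))  # variations of 3 elements taken 2 at a time
print(list(permutations(S)))     # permutations of 3 elements
print(list(combinations(S, 2)))  # combinations of 3 elements taken 2 at a time
```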
Loss of origin.
In the operations of union C1∪C2 and intersection C1∩C2 of sets, the elements of the resulting set lose their origin, that is, the name of the set from which they come. There is thus a loss of information, a consequence of the absence of an explicit link between each element and the set to which it belongs.
The power set.
Set theory defines the power set of a set C: the set of all possible subsets of C, with the notation C* (Kleene). Sometimes the notation 2^C is also used, since 2^n is the number of elements of the power set of a set of n elements. Both notations are merely symbolic: they specify neither the way of obtaining the power set nor its description.
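One concrete way of actually obtaining the power set, as opposed to merely naming it; an illustrative Python sketch:

```python
from itertools import chain, combinations

def power_set(C):
    """All 2**n subsets of a set C of n elements, as frozensets."""
    elems = list(C)
    return {frozenset(s)
            for s in chain.from_iterable(
                combinations(elems, r) for r in range(len(elems) + 1))}

P = power_set({1, 2, 3})
print(len(P))  # 2**3 = 8 subsets
print(sorted(sorted(s) for s in P))
```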
Critique of the Zermelo-Fraenkel (ZF) axiomatics of set theory
Axiomatic set theory is considered the "mother of mathematics", the fundamental pillar of mathematics. Most mathematical objects (numbers, relations, functions, etc.) are defined in terms of sets.
Set theory has been of great help in formalizing many domains of mathematics. It has also contributed to the discovery of completely new fields, such as the field of transfinite numbers.
ZF set theory is the best known and most widely accepted set theory today, so it is considered the "standard" set theory. But philosophically it is unsatisfactory, since it does not answer fundamental questions:
There is no underlying philosophical conception. It is made up of "patches" (to avoid paradoxes, to contemplate infinite sets, etc.). The axioms only specify restrictions or conditions of existence. They do not speak of degrees of freedom. They do not answer fundamental philosophical questions, in particular their relation to philosophical categories.
Axioms, which are the foundations of a theory, are supposed to be self-evident. But this is not the case with axiomatic set theory. Some axioms are controversial and debatable. There are no solid arguments in favor of the truth of the axioms.
It is an unintuitive theory. The axioms are artificial, unnatural and arbitrary. This last aspect was already pointed out in his day by von Neumann.
It is a first-order axiomatization of set theory (it uses first-order logic), i.e., only sets, elements and subsets are considered. The language of set theory is that of first-order predicate logic, which is a very limited language.
It is technically an excessively complex theory, which makes it irrelevant in practical application. It is a theory divorced from practice. It does not indicate how sets are constructed. In particular, the axiom of choice is the least constructive principle there is.
It does not allow for fuzzy sets, i.e., there are no nuances or degrees of membership of an element to a set. It is based on a restricted concept: true-false dualism.
It is a theory that is oriented only to the proof of theorems from axioms.
The concepts are not explicit and delimited. They are blurred and implicit among the axioms.
It is a theory that aims to formalize sets starting from a non-formalized language (natural language), which is ambiguous and open to many interpretations. There is no complete formal language; the theory relies only on the language of first-order logic.
There are no "loose" elements (that do not belong to any set). Everything must be a set. The ultimate constituent is the empty set. An element a must appear as {a}, the set containing the element a. This goes against common sense, since a set should be a grouping of at least two elements.
The axiom of choice should only be applicable to infinite sets when a descriptive mechanism is used. But the theory does not mention this aspect.
The axiom of separation precludes the existence of the universal set U, the all-inclusive set. Indeed, if it existed, using the property of not belonging to itself, and applying the separation axiom, there would exist the set W (the set of sets that do not belong to themselves). Since W is contradictory, U cannot exist.
This implies that the complementary set C' of a set C does not exist, since the union of both would be the universal set.
Set theory and predicate logic are intermingled. There is no clear separation of concepts. It is not clear whether logic is a branch of mathematics, whether mathematics is an extension of logic, or whether logic grounds mathematics.
The natural numbers are constructed from sets, which is counterintuitive: numbers are not sets.
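The construction in question is usually von Neumann's: 0 is the empty set and each number is the set of all its predecessors. An illustrative Python sketch (using frozenset as a stand-in for "set", since Python sets cannot contain sets):

```python
# Von Neumann construction: 0 = {}, n + 1 = n ∪ {n}.
def von_neumann(n):
    num = frozenset()        # 0 is the empty set
    for _ in range(n):
        num = num | {num}    # successor: n ∪ {n}
    return num

zero, one, two, three = (von_neumann(k) for k in range(4))
print(len(three))    # the set n has exactly n elements
print(two in three)  # m < n becomes m ∈ n
print(two < three)   # m < n is also m ⊂ n ('<' on frozensets is proper subset)
```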
In any case, the major and resounding criticism of axiomatic set theory (and of axiomatics in general) was Gödel's incompleteness theorem (1931), which showed that any consistent formal axiomatic system that includes the arithmetic of the natural numbers is incomplete, i.e., that it contains propositions that can neither be proved nor disproved from the axioms. Gödel also showed that no such system can prove its own consistency.
The Algebraic Structures
In the 1930s, a group of (mainly French) mathematicians, working under the collective name of Nicolas Bourbaki, elaborated a structural theory of mathematics, reflected in a series of books (10 in total) published between 1935 and 1998 under the general title "Elements of Mathematics" (a name clearly inspired by Euclid). A summary of their philosophy was published in 1949 in the article "Foundations of Mathematics for the Working Mathematician", which argued that all of mathematics could be grounded on the notion of algebraic structure.
An algebraic structure generalizes the structure of the set of real numbers and the properties of the operations of addition and product:
Both operations are associative and commutative.
The product is distributive with respect to the sum.
There are two identity elements (0 for the sum, 1 for the product).
There is an inverse element in both operations (except for the zero in the product).
The generalization is based on:
Using structured sets, as opposed to simple sets, which are flat, without internal structure.
The structure is based on internal operations (between the elements of the set) that fulfill certain properties and on relationships between the elements (order, connectivity, separability, etc.).
Functions between structured sets preserve structure.
Examples of algebraic structures are monoids, groups, rings and fields. The real numbers are a particular case of a field.
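The defining laws of such structures can be checked mechanically on a finite carrier set; an illustrative Python sketch (the function name is mine):

```python
from itertools import product

def is_monoid(elements, op, identity):
    """Check the monoid laws on a finite carrier set:
    closure, associativity, and a two-sided identity element."""
    closed = all(op(a, b) in elements for a, b in product(elements, repeat=2))
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a, b, c in product(elements, repeat=3))
    ident = all(op(identity, a) == a == op(a, identity) for a in elements)
    return closed and assoc and ident

# The integers modulo 4 under addition form a monoid (in fact a group):
Z4 = {0, 1, 2, 3}
print(is_monoid(Z4, lambda a, b: (a + b) % 4, 0))  # True
# Subtraction mod 4 is not associative, so it is not a monoid:
print(is_monoid(Z4, lambda a, b: (a - b) % 4, 0))  # False
```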
Structuralist theory in mathematics inspired the movement of structuralism, a generalist movement that involved several disciplines, such as linguistics, psychology, and anthropology.
Limitations of algebraic structures
The relations between the elements of the set are static. They cannot be modified.
There are only two types of internal operations.
Internal operations are functions limited to two arguments (elements of the set) that produce an element of the same set.
No processes can be performed.
The Categories
The concepts of set and structure are too restrictive. In some domains they could not be applied, so a higher level of abstraction and generalization was necessary, which led to category theory.
Category theory is a mathematical theory that deals abstractly with mathematical structures and their relationships, with the aim of investigating what many domains have in common.
A category is an abstract concept, like that of a set. But in a set there are only vertical relationships: between each element and the set to which it belongs. There are no horizontal relations, that is, between the elements of the set. In a category, these horizontal relations do exist (the so-called "morphisms" or "arrows"), so we can imagine it as a kind of structured set.
The concept of category
A category consists of two parts: 1) a class or collection of objects; 2) binary relationships (called "morphisms" or "arrows") between the objects of the class. For every pair of objects (equal or distinct) there is a set of morphisms (possibly empty) that transform one object into the other.
The notion of composition of morphisms generalizes fundamental concepts of mathematics, such as addition, product and exponentiation, as well as the basic mechanisms of logic.
A further abstraction was the concept of functor. Just as sets can be related to each other by means of functions, categories can be related to each other by means of functions that preserve their structure.
Analogies set theory - category theory
The following analogies can be drawn between set theory and category theory:
Characteristic             | Set Theory                                     | Category Theory
Name of the grouping       | Set                                            | Class or collection
Components of the grouping | Elements                                       | Objects
Relationship (binary)      | Vertical (membership of an element in the set) | Horizontal (between two objects of the class)
Value of the relationship  | Binary: true (T) or false (F)                  | Set of morphisms between the two objects (possibly empty)
Origin of category theory
The concept of category was introduced in 1945 by Samuel Eilenberg and Saunders Mac Lane with the publication of the article "General Theory of Natural Equivalences" [Eilenberg & Mac Lane, 1945]. They found that all structures shared a number of common features:
They refer to classes of sets.
Internal functions are defined that relate some elements to others.
Functions can be composed associatively.
There is always at least one function, which is the identity function.
From this they deduced that sets need not be mentioned explicitly and that mathematics could be based solely on the concepts of function and composition of functions, instead of the classical concepts of set and membership. In this way a more generic framework was created, in which sets and structures are particular cases of categories.
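The common features listed above (objects, composable functions, identities) can be sketched concretely. In this illustrative Python sketch (the encoding of morphisms as (source, target, function) triples is mine), the objects are Python types:

```python
# A toy category: objects are Python types; morphisms are
# (source, target, function) triples with associative composition
# and an identity morphism for every object.
def identity(obj):
    return (obj, obj, lambda x: x)

def compose(g, f):
    """g ∘ f, defined only when the target of f is the source of g."""
    f_src, f_tgt, f_fun = f
    g_src, g_tgt, g_fun = g
    assert f_tgt == g_src, "morphisms not composable"
    return (f_src, g_tgt, lambda x: g_fun(f_fun(x)))

length = (str, int, len)              # a morphism str -> int
double = (int, int, lambda n: 2 * n)  # a morphism int -> int

h = compose(double, length)           # str -> int
print(h[2]("hello"))                  # 2 * len("hello") = 10

# Identity laws, checked pointwise: id ∘ f = f = f ∘ id
print(compose(identity(int), length)[2]("abc"))  # 3
print(compose(length, identity(str))[2]("abc"))  # 3
```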
Around 1960, Alexander Grothendieck introduced the concept of "topos" (a word that is in the singular, the plural is "toposes" or "topoi"), a generalization of the concept of sheaf from algebraic geometry.
Around 1970, William Lawvere extracted the logical structure of the topos concept and introduced axioms that led to its current notion. Lawvere's purpose was to construct a higher-order logic in terms of category theory, but the result was a kind of intuitionistic logic. According to Lawvere, category theory, and especially the theory of toposes, unites analytic mathematics and intuitionistic mathematics.
Lawvere's final proposal is to ground mathematics in a reflexive concept: the category of categories, which includes category theory itself, set theory, and logic.
Critique of category theory
Category theory, because of its abstract and generic character, has many advantages:
It has helped to relate, connect, organize, reconceptualize and unify many domains of mathematics. The result has been a certain dissolution of the boundaries between those domains, with general results obtained in category theory carrying over as particular results to each of them.
It offers economy of thought and expression, dispensing with details.
It has forced a questioning of the very distinction between mathematics and metamathematics (it really unifies both aspects), as it affects the very foundation of mathematics.
It has contributed to bringing mathematics and philosophy closer together, by raising many epistemological and ontological issues, including the theory of universals.
Category theory clarifies why set theory does not serve as a foundation for mathematics: because first-order predicate logic is too simple and cannot be applied to complex domains such as topology, algebraic geometry, etc. Intuitionistic logic is adequate for these domains.
Category theory is currently playing a role analogous to that once played by set theory, providing a new vision and new foundation for mathematics, and contributing to its evolution on the path of increasing abstraction, generalization, and unification. The categories are intended to transcend, connect, unify and systematize the different domains of mathematics by applying abstract and generic concepts, and thus reveal their common deep structures.
However, numerous objections can be raised to category theory:
Despite the undoubted achievements of category theory, it remains a somewhat artificial theory, and it is doubtful whether it will ever truly become the foundation of mathematics. Although category theory claims to be the foundation of mathematics, no complete set of axioms has yet been presented in this respect.
Category theory represents the culmination of the search for the most general and abstract ingredients of mathematics, but it has moved away from conceptual simplicity. Category theory is a clear example of the violation of Einstein's razor principle. [see Fundamentals - Principle of Simplicity].
The concept of category is not a "pure" concept, but one composed of (and dependent on) two others: the concept of set (or collection of objects) and the concept of morphism (or arrow), and it furthermore uses them in a restricted way:
The set appears in the class (or collection) of objects. Moreover, there are multiple conceptions of "set", depending on the axiomatic under consideration (with/without the axiom of choice, etc.).
Morphism is a particular and simple case of function. It is a restricted concept, not generic enough. It lacks the process component, since morphism only establishes a direct correspondence between two objects.
When different types of objects (e.g., sequences) or different types of arrows (e.g., arrows of arrows) are needed, one must turn to higher-dimensional categories. But categories do not provide all possible combinatorics: for example, sets or sequences of arrows, or mixed sets and sequences of objects and arrows, are not contemplated.
Attributes cannot be assigned to categories, objects, arrows and functors.
The concept of arrow or morphism is ambiguous; it has been called "multifaceted". It only states that an object A is related to an object B, but does not specify the meaning of that relation. In fact, there may be different interpretations of the concept of arrow [see Comparisons - MENTAL vs. Category Theory].
The foundations of category theory are not yet clear. The theory is not conceptually consolidated and is currently still evolving. Even the very definition of category has changed over time, depending on objectives or needs.
There is no absolute or universal category.
Category theory is heir to the axiomatic tradition. It has a Hilbertian (purely formal) and not a Fregean (conceptual) orientation, and mathematics needs both views.
There are no constructive procedures. There are only definitions and no formal language. The emphasis is on the "what" (the descriptive) and not on the "how" (the constructive). For example, there are no explicit mechanisms for constructing object structures such as sequences, trees, matrices, etc.
Regarding the theory of toposes, it must be said that it is too complex and abstruse. And it imposes an even more restricted model, since it relates in a specific way many concepts.
There are numerous ways of thinking about higher-order categories. One should also consider categories themselves as particular cases of a higher category; but then, what kind of category is that? Since the category of categories is also a category, it would have to include itself, so we end up in a Russell-type paradox, i.e. one that is self-referential.
To solve the problem of categories of categories, three solutions have been proposed:
Consider only "small" categories, i.e., categories that are sets. The category of small categories generalizes the notion of class of all sets, but does not include the category of sets nor the category of structures.
Add a new axiom to set theory so that hierarchies of classes (classes of classes, etc.) can be considered. In this way it is possible to obtain categories whose components are classes and hierarchies of classes, but without arriving at the category of all categories. This solution was proposed by Grothendieck.
Axiomatize category theory itself, as was done with set theory. The axiomatic system of set theory would be a particular case of the axiomatic system of category theory when considering discrete categories (categories whose only morphisms are the identity morphisms). This was the solution proposed by Lawvere in 1966.
The Functions
None of the three previous approaches (sets, structures and categories) was satisfactory at the practical level, especially for computer scientists, who needed to work with functions defined as processes with parameters. Category theory uses functions that are simple relations between objects: functions that can be composed but that do not admit parameters. A general theory of functions was needed to enable their practical application.
Computer scientists turned to a theory developed in 1933 by Alonzo Church, the lambda calculus, a formal theoretical model for functional expressions, i.e. functions defined from other functions. The concept of function has mechanisms that have a certain analogy with set theory:
A function is a mathematical entity that corresponds to that of set.
An argument of a function corresponds to an element of a set.
The application of a function with an argument corresponds to the membership of an element in a set.
The definition of a function corresponds to the definition of a set, via the principles of:
Extension. A function is completely determined by specifying all its values (arguments and results). Two functions with the same values are equal.
Intension. A function can be defined by describing its input and output values (arguments and results).
Lambda calculus functions are mathematical objects with the following characteristics:
Functions have no name (they are anonymous).
Functions can be arguments of other functions.
The result of a function is another function.
There is referential transparency: the meaning of a functional expression depends only on the meaning of the subexpressions.
There is a clear difference between the definition of a function and its application.
Variables (or parameters) are also considered functions or autofunctions, that is, they apply to themselves and return themselves.
Partial evaluations can be performed (supplying values for only some of the parameters).
The numbers are functions (Church numerals): functions that emulate the arithmetic operations.
The truth values (T and F) are functions and allow implementing the logical condition (If-Then-Else).
Functions can be defined that perform logical operations, predicative functions (which return a truth value), and recursive functions.
Higher order functions can be defined.
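Several of these characteristics can be sketched with Python lambdas (an illustrative encoding, not Church's original notation): anonymous functions, Church numerals with addition, and Church booleans implementing If-Then-Else.

```python
# Church numerals: the numeral n applies a function f to x exactly n times.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Convert a Church numeral to an ordinary integer for inspection.
def to_int(n):
    return n(lambda k: k + 1)(0)

ONE = SUCC(ZERO)
TWO = SUCC(ONE)
assert to_int(ADD(TWO)(TWO)) == 4

# Church booleans: T selects its first argument, F its second.
T = lambda a: lambda b: a
F = lambda a: lambda b: b

# If-Then-Else is just the application of a boolean to two alternatives.
IF = lambda c: lambda then: lambda els: c(then)(els)
assert IF(T)("yes")("no") == "yes"
```

Note that every name above denotes a function that is itself built from and returns other functions, which is the sense in which "everything is a function" in the lambda calculus.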
Kleene and Rosser (1935) showed that the lambda calculus was inconsistent. Church then developed (1940) a functional theory of types, a simpler and more general theory than the one that appeared in Russell and Whitehead's Principia Mathematica. Church's original lambda calculus had no types, so functions could be applied without restriction.
In Church's type theory, functional expressions are classified into types, which are categories of functions and which play a role analogous to that of set types in set theory. Types restrict the possible combinatorial forms of such expressions, as in set theory.
Church at first had a much more ambitious goal: to construct a complete formal system and a universal language for modeling all of mathematics. But when he saw that his system was affected by a Russell-style paradox, he scaled back his initial goal to focus exclusively on modeling computability by means of functional expressions.
When Church invented the lambda calculus, computers did not yet exist. However, it can be considered the first functional language in history. It has been the formal theoretical model for many functional languages, having had great influence on the design of programming languages in general. Lisp was the first language to apply the lambda calculus, making it the oldest and most popular functional language. It is oriented to symbolic computation and is mainly applied to artificial intelligence topics.
In 1969, Dana Scott presented a denotational semantics for the lambda calculus, which interpreted computer programs (built with functions) as genuine mathematical objects, thus showing that computer science could be considered a branch of mathematics. Scott was a pioneer in defining the semantics of programming languages.
Limitations of functions
Most authors agree in attributing to Descartes the paternity of the abstract concept of function, although it was Leibniz who later introduced the term "function" (or its Latin equivalent), in 1694. The creative leap taken by Descartes was to conceive the function as an algebraic relation. This introduction of algebraic language as a language in which to express relations in a compact way constituted a great revolution in mathematics and a key concept in the development of science.
A function f, as defined in mathematics, is a correspondence (or mapping) that assigns to each element of a source set A (the domain) an element of a target set B (the codomain or range). When the source set is the Cartesian product of n sets, A = A1×A2×...×An, we have an n-adic function, where n is the number of arguments of the function. For example, addition f(x, y) = x + y is a dyadic function with domain R×R and codomain R.
This function definition has certain limitations:
There is no associated process, computation or algorithm for obtaining the result from the arguments.
The number of arguments is fixed: there are no optional arguments, nor can the number of parameters be variable.
The arguments and the result of the function are always elements of sets.
Arguments are positional and argument names do not appear.
A function cannot return another function as a result.
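Functional languages modeled on the lambda calculus lift several of these restrictions. A sketch in Python (an illustrative choice of language): functions can return functions as results, and arguments may be named, optional, or variable in number.

```python
# A function that returns another function as its result (currying).
def add(x):
    return lambda y: x + y

add3 = add(3)          # partial application: fix the first argument
assert add3(4) == 7

# Named arguments, optional arguments, and a variable number of
# arguments -- none of which the set-theoretic definition allows.
def total(*numbers, initial=0):
    result = initial
    for n in numbers:
        result += n
    return result

assert total(1, 2, 3) == 6
assert total(1, 2, initial=10) == 13
```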
Limitations of the lambda calculus
The lambda calculus overcomes these limitations of mathematical functions, but it has limitations as well:
It is a single-paradigm theoretical model: "everything" is a function. Therefore, every information structure has to be built from scratch whenever it is needed; the only information structure it provides is the sequence within a functional expression.
Considering numbers and logical values as functions is a mere theoretical exercise. Conceptually, numbers and logical values are not functions.
Analogously, the logical condition (used to implement decision logic) is conceptually something quite different from a function.
Conclusions
The Tale of the Blind Men and the Elephant
The story of the foundation of mathematics through its fundamental concepts recalls the tale of the blind men and the elephant, the famous Indian folk tale popularized by Rumi, a 13th-century Persian Sufi.
In a village there was a group of blind men, friends, who occupied their time discussing things happening in the world. One day, the subject of the "elephant" came up. None of them had ever encountered an elephant, so they asked for one to be brought to them so they could find out what it was like. One touched its side, another its tail, another its leg, another its trunk, another its ear. Then they gathered to discuss what they had "seen". One said, "An elephant is like a wall" (for he had touched its side). "No, it is like a rope" (he had touched its tail), said another. "You are both wrong," said a third, "it is like a column supporting a roof" (he had touched a leg). "It's like a python snake," said the fourth (he had touched the trunk). "It's like a blanket," said the one who had touched the ear. And so they went on and on arguing.
The moral of this story is that we cannot understand things from partial aspects alone; their true nature is revealed only when we contemplate the totality, when we have a global vision.
When Pythagoras affirmed that the essence of reality is numbers, what he really did was to discover an aspect of the "mathematical elephant," a dimension of inner and outer reality, that is, an archetype of consciousness. But reality consists of more dimensions. Mathematics is multidimensional.
Another dimension is the set dimension, the dimension that Cantor discovered. This dimension is essentially different from that of numbers, despite many attempts to unify them or to consider numbers a concept derived from sets. Attempting to base mathematics on the concept of set alone is like sitting on a one-legged chair, or building a house of sand.
Algebraic structures, categories and functions are very important concepts of mathematics, but it is impossible to base mathematics on any of them because they are not primitive concepts but derived ones.
The attempt to base mathematics on category theory must be considered unsuccessful because of its complexity. Moreover, the definition of category is too restrictive, since it imposes limits on the freedom of mathematical expression, and the concept of topos is likewise too complex and restrictive. Mathematics cannot be based on something complex and restrictive, but only on simplicity and freedom. If something is complex at its base, it is because it is ill-conceived.
In short, the foundation of mathematics must rest on simple, deep and universal concepts, not on superficial or partial ones (as in the story of the elephant): on archetypes that afford the maximum degree of freedom and creativity, and with which it is possible to construct and describe all kinds of concrete mathematical expressions (numbers, sets, functions, structures, etc.), as well as categories of mathematical expressions.
MENTAL, the theoretical-practical foundation of mathematics
With MENTAL, the problem of the foundations of mathematics is solved in a simple way:
Instead of formal axioms, 12 semantic axioms are used, which are universal semantic primitives, primary archetypes, philosophical categories, degrees of freedom and semantic dimensions. These axioms correspond to generic concepts and define the limits of the expressible.
These axioms are like the basic instructions of a computer, of a "mental", philosophical and universal computer. The metaphor of the computer with general instructions clarifies the foundation of mathematics. "The computer was invented to help clarify the philosophical question of the foundations of mathematics" (Gregory Chaitin).
The foundation is both theoretical and practical. Every expression uses primitives, thus connecting the internal and the external.
Mathematical paradoxes are resolved, including Russell's paradox. Or, rather, mathematical paradoxes disappear because they are simply expressible as fractal expressions.
It encompasses the four concepts on which attempts have been made to found mathematics:
Numbers are implicit in the operation of addition and its dual (subtraction). The concept of number is indefinable; we have only arithmetic. "Arithmetic does not talk about numbers, but works with numbers. Arithmetic is the grammar of numbers" (Wittgenstein).
The set is contemplated directly, and also its dual (the sequence).
Categories are associated with the generalization of all kinds of mathematical expressions, not only of classes (or sets) of objects and the relations between those objects. One can define categories of functions, sets, sequences, vectors, tensors, matrices, etc., and, obviously, higher-order categories.
Functions are derived expressions. They can be defined in many ways by means of primitives.
MENTAL is a universal formal language that underlies mathematics. It is the realization of Church's old dream of creating a universal formal language to model all of mathematics. It is a universal paradigm, capable of expressing all kinds of paradigms, including the functional one.
John Wheeler used to say that "No physical theory that deals only with physics will ever explain physics." This statement can be generalized: "No theory that deals only with mathematics can ever explain mathematics." In this sense, MENTAL is on a higher level than mathematics, so it underlies all the formal sciences (computer science, cybernetics, artificial intelligence, systems theory, etc.).
Bibliography
Aczel, Amir D. El Artista y el Matemático. La historia de Nicolas Bourbaki, el genio matemático que nunca existió. Gedisa, 2009.
Alexandrov, A.D.; Kolmogorov, A.N.; Lavrent'ev, M.A. Mathematics: Its Content, Methods and Meaning. Dover, 1999.
Benacerraf, Paul & Putnam, Hilary (eds.). Philosophy of Mathematics. Selected Readings. Cambridge University Press, 1984.
Cantor, Georg. Fundamentos para una teoría general de conjuntos. Escritos y correspondencia selecta. Crítica, 2005.
Corfield, David. Towards a Philosophy of Real Mathematics. Cambridge University Press, 2003.
Courant, Richard; Robbins, Herbert. ¿Qué son las matemáticas?. Conceptos y métodos fundamentales. Fondo de Cultura Económica, 2002.
De Lorenzo, Javier. La matemática: de sus fundamentos y crisis. Tecnos, 1998.
Dou, Alberto. Fundamentos de la matemática. Editorial Labor, 1970.
Eves, Howard. Foundations and Fundamental Concepts of Mathematics. Dover, 1997.
Ferreirós, José. El nacimiento de la teoría de conjuntos. Universidad Autónoma de Madrid, 1993.
Garcíadiego Dantan, Alejandro R. Bertrand Russell y los orígenes de paradojas de teoría de conjuntos. Alianza, 1992.
George, Alexander; Velleman, Daniel J. Philosophies of Mathematics. Blackwell, 2005.
Grattan-Guinness, Ivor. The Search for Mathematical Roots, 1870-1940. Princeton University Press, 2001.
Halmos, Paul R. Naive Set Theory. Martino Fine Books, 2011.
Hersh, Reuben. What is Mathematics, Really?. Oxford University Press, 1999.
Hintikka, Jaakko. The Principles of Mathematics Revisited. Cambridge University Press, 1996.
Russell, Bertrand. Los principios de la matemática. Espasa Calpe, 1967.
Tymoczko, Thomas. New Directions in the Philosophy of Mathematics. Princeton University Press, 1998.