The P vs. NP Problem

One of the seven "millennium problems"

"The most important question in all of computer science and one of the most important in mathematics" (Lance Fortnow).

"A gift to the mathematics of computer science" (Steve Smale).




The theory of computational complexity

Computational complexity theory studies the difficulty of problems that are solved computationally. To do so, it considers all the possible algorithms that can be used and classifies problems according to the computational resources they require, which are basically of two types:
  1. The time required to solve the problem, which is proportional (or directly related) to the number of elementary steps or operations.

  2. The space or memory required.
We speak of time and space complexity, respectively. Other measures of computational complexity that can be considered, depending on the type of problem, are: the number of processors, the amount of communication data, the number of logic gates, etc.

Computational complexity theory differs from other related theories and concepts:
Decidable and undecidable problems

A decidable problem is a decision problem (one whose answer is one of two possible values: "yes" or "no", 0 or 1, true or false) for which an algorithm exists that always produces the correct answer. For example, the problem of finding out whether or not a number is prime is decidable.

An undecidable problem is a decision problem whose answer would in principle also be one of two possible values, but which no algorithm can decide. The paradigmatic example is Turing's famous halting problem: given a Turing machine M and an input w, determine whether M will eventually stop on w after a finite number of steps. Alan Turing proved that this problem is undecidable. Another example is Gödel's incompleteness theorem: in any consistent formal axiomatic system capable of expressing the arithmetic of the natural numbers, there are statements that can neither be proved nor disproved.
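Turing's diagonal argument can be made concrete in a short sketch, here in Python. Suppose, hypothetically, that a function halts(f, x) always decides whether f(x) eventually stops (the names are illustrative; the point of the argument is that no such function can exist):

    def paradox(f):
        # Ask the hypothetical decider about f running on itself.
        if halts(f, f):
            while True:    # f halts on itself, so loop forever
                pass
        else:
            return         # f loops on itself, so halt at once

    # Does paradox(paradox) halt?
    #  - If halts(paradox, paradox) returns True, paradox(paradox) loops.
    #  - If it returns False, paradox(paradox) halts.
    # Either answer is wrong, so halts() cannot exist.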

Hartmanis & Stearns [1965] proved the existence of decidable problems with arbitrarily high computational complexity. For this purpose they used a diagonalization method, a method that goes back to Cantor, who used it to prove the uncountability of the real numbers.


Types of computational problems

According to the time required to solve them (time complexity), computational problems that depend on a variable x, associated with the magnitude of the time complexity of the problem (usually, the size of the input), can be divided into two classes:
  1. Computational problems that are solved in polynomial time, i.e., the time required to reach the solution or result is of the type c·x^k or, more generally, of the type c_0 + c_1·x + c_2·x^2 + ... + c_k·x^k, where the c_i are constants. Time complexity, in general, is usually expressed as O(x^r), where r is a real number (only the power with the greatest exponent is specified, and the letter "O" means "of the order of"). The simplest problems are of the type O(x), i.e., of computational time proportional to x. Polynomial-time algorithms are also called "tractable" or "feasible". Examples (two of these are sketched in code after this list):

    • The sum of two natural numbers of n digits is computed in polynomial time, in particular in time proportional to 2n, i.e., O(n).

    • The product of two numbers of n digits requires a time proportional to n^2: O(n^2). There is an optimized method (Karatsuba's algorithm) that requires only O(n^1.585).

    • Given a natural number n, find out if it is prime. By trial division against every possible divisor, its time complexity is O(n).

    • Sorting a set of n elements has time complexity O(n^2) with simple methods, although there are optimized methods, such as mergesort with O(n·log n), that lower this cost.

    • Searching over a sequence of ordered elements (e.g., a dictionary) by binary search has time complexity O(log n).

  2. Problems that need an exponential time to be solved, that is, a time of the type k·c^x, where k and c are constants (with c > 1). This function can grow extraordinarily fast as x increases. The time complexity is O(c^x). Combinatorial problems usually have this type of complexity. Exponential-time algorithms are also called "intractable" or "infeasible."
Examples of this second type are the subset sum problem, the traveling salesman problem, and the bin packing problem, discussed below (a brute-force sketch also follows this list).
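As a concrete illustration, here are two minimal Python sketches, standard textbook versions written for this text. The first implements two of the polynomial-time examples above (trial-division primality testing and binary search):

    import math

    def is_prime(n):
        # Trial division: O(sqrt(n)) division tests (dividing by every
        # number below n would give the O(n) bound cited above).
        if n < 2:
            return False
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                return False
        return True

    def binary_search(items, target):
        # Search in an ordered sequence: O(log n) comparisons, since
        # each step halves the remaining interval.
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1    # not found

The second shows the exponential behaviour of brute force on the subset sum problem: since every subset may have to be examined, no tuning inside the loop changes the O(2^n) count of candidates:

    from itertools import combinations

    def subset_sum_bruteforce(numbers, target=0):
        # Examine every non-empty subset: 2^n - 1 candidates in the
        # worst case, hence exponential time.
        for size in range(1, len(numbers) + 1):
            for subset in combinations(numbers, size):
                if sum(subset) == target:
                    return subset    # first solution found
        return None                  # no subset sums to the target

    # Example: subset_sum_bruteforce([-7, -3, -2, 5, 8]) -> (-3, -2, 5)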

The classes P and NP

The class of problems that can be solved in polynomial time is called the class P.

The class of problems whose solution can be verified in polynomial time is called the class NP (from "nondeterministic polynomial time").

According to these two definitions, P is a subset of NP (P ⊂ NP), because if it is possible to solve a problem in polynomial time, the solution obtained can also be verified in polynomial time.

It may happen that for certain P problems, the verification time is the same as the solution time. For example, verifying the sum of two numbers requires performing the sum itself. So, paradoxically, it is sometimes easier to verify the solution of a complex problem than that of a simple problem.

Within the NP class there are easy problems (NP-easy), those that are solved in polynomial time, and hard problems (NP-hard), those for which only exponential-time algorithms are known. According to this definition, NP-easy = P.


The question P vs. NP

The problem that arises is whether P = NP or P ≠ NP, i.e., whether the class of problems that can be solved in polynomial time is equal to or different from the class of problems whose solutions can be verified in polynomial time.

For example, the subset sum problem is of type NP, but since no efficient algorithm that solves the problem in polynomial time has yet been found, we do not know whether it is of type P or not.
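By contrast, verifying a proposed solution (a "certificate") for subset sum takes only polynomial time, which is exactly what places the problem in NP. A minimal sketch, continuing the brute-force example above:

    from collections import Counter

    def verify_subset_sum(numbers, candidate, target=0):
        # Polynomial-time check: every chosen element must occur in
        # the input (with multiplicity), and the chosen elements must
        # sum to the target.
        available = Counter(numbers)
        chosen = Counter(candidate)
        if any(chosen[x] > available[x] for x in chosen):
            return False
        return sum(candidate) == target

    # verify_subset_sum([-7, -3, -2, 5, 8], [-3, -2, 5])  -> True
    # verify_subset_sum([-7, -3, -2, 5, 8], [5, 8])       -> False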

The question P =? NP is both profound and basic. It is one of the most important open questions in computer science because of its large theoretical, practical, and even philosophical implications. One possible way to show that P ≠ NP is to try to find a problem, or a class of problems, in NP−P, i.e., in NP but not in P.

In any case, whether P and NP are the same or not, trying to find the relationship between P and NP is fundamental to understanding the deep nature of computation.


The NP-complete class

The subset sum problem, the traveling salesman problem, and the bin packing problem are combinatorial problems that share an interesting property: essentially, at bottom, "they are all the same problem," in the sense that, if an efficient algorithm exists for one of them, efficient algorithms would also exist for the others (each can be translated into the others in polynomial time). This class of problems is called "NP-complete", and its members are the most difficult problems of the NP class. No efficient algorithm has yet been found for any of these problems. If an efficient algorithm were found for one NP-complete problem, it would imply that P = NP.

The discovery of the NP-complete class constitutes a milestone of enormous importance, and many problems beyond the above three have since been proved NP-complete. On the other hand, the problem of factoring a natural number, the graph isomorphism problem, and the discrete logarithm problem are NP problems, but they have not yet been proved to be NP-complete.


Non-NP problems

In the subset sum problem, if only one subset that sums to zero is asked for, it is very easy to verify that a proposed subset is a solution. If all subsets with this property are asked for, verification requires repeating the whole process to see whether every solution has been included. In this case, the (exponential) solution time and the verification time are the same, as is normally the case with P problems.

And the same is true for the traveling salesman problem. If we only require a tour that runs through all the cities with a total length less than a certain given length D, it is enough to check that the path is cyclic, that all the cities are visited, and that the sum of distances is less than D (see the sketch below). But if we are asked for the minimum cyclic path, verification requires repeating the resolution process.
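A minimal Python sketch of this easy verification for the bounded version (the distance-matrix input format is an assumption made for illustration):

    def verify_tour(dist, tour, bound):
        # Polynomial-time check of a proposed tour:
        #  1. every city appears exactly once;
        #  2. the total length of the closed cycle is below the bound.
        # dist is an n x n matrix of pairwise distances.
        n = len(dist)
        if sorted(tour) != list(range(n)):
            return False
        total = sum(dist[tour[i]][tour[(i + 1) % n]]  # wraps around to close the cycle
                    for i in range(n))
        return total < bound

The check runs in time polynomial in n, in sharp contrast with finding the minimum tour.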

These types of problems, which require exponential time in both solving and verification, are not NP problems, so they are called non-NP problems.


General scheme of problem types

Resolution   | Verification | Type of problem
-------------|--------------|-----------------------
Polynomial   | Polynomial   | NP-easy = P
Exponential  | Polynomial   | NP-hard ⊃ NP-complete
Exponential  | Exponential  | Non-NP


The two alternatives (P vs. NP) and their consequences

If P ≠ NP, the current separation between easy and hard problems would be confirmed: NP-complete problems would remain intractable. If P = NP, the consequences would be very important.
Some thoughts and opinions on the question P vs. NP
The ways to decrease computational complexity

There are two ways to try to decrease the computational complexity of problems: the practical way and the theoretical way.

The practical way tries to overcome the limitations of current digital computers by using alternative physical computational models. The theoretical way tries to use special efficient algorithms.
The P vs. NP Problem from the MENTAL Paradigm

General Issues
The question of the two modes of consciousness

The P vs. NP question is, once again, a particular case of the universal bipolarity, represented or symbolized by the characteristics of the two modes of consciousness of the cerebral hemispheres. In this case, the poles are the NP-easy and NP-hard computational problems.

There is a continuous gradation between the two poles; the boundary is not sharp but diffuse. Indeed, the NP-easy vs. NP-hard distinction is fuzzy if posed in terms of efficiency, since it equates "efficient" with "polynomial time". But polynomial-time algorithms are not always feasible or tractable. For example, an algorithm requiring n^100 steps could not be executed in reasonable time even for a small value of n (e.g., 10), because the resulting number (10^100) is larger than the number of atoms in the universe (about 10^77). And not every exponential-time algorithm is intractable (see the sketch below). The boundary between polynomial time and exponential time is therefore somewhat fuzzy.
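The arithmetic behind this example is easy to check. A small Python sketch (the two growth functions compared are chosen only for illustration):

    # A "polynomial" bound that is hopeless in practice:
    n = 10
    print(n ** 100)        # 10^100, more than the ~10^77 atoms
                           # estimated to exist in the universe

    # An "exponential" expression that is harmless at this size:
    print(1.0001 ** 1000)  # about 1.105: exponential in form,
                           # negligible in practice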

To find an efficient algorithm for a problem is to bring it to consciousness; ground must be gained from the unconscious. This is the process of science, consciousness and evolution. If an NP-hard problem cannot be solved as an NP-easy one, we must try to optimize it, lowering its degree of time complexity as much as possible.

For easy problems (those of polynomial time) it is sufficient to use reason. Difficult problems (those of exponential time) lead us to wide spaces of many possibilities that overflow our normal capacity for analysis, so we must contemplate them from a higher point of view and use creativity, intuition and imagination, applying for example heuristic rules that generate understanding. Brute force is searching blindly, aimlessly, without awareness. When there is a heuristic, the search for solutions is simplified; there is direction and more awareness of the problem, because the objective is encoded in the heuristic.

Therefore, the P vs. NP problem is nothing more than the universal problem of the two modes of consciousness. The meeting point of both modes is to be found in the primary archetypes, in the universal semantic primitives that represent the supreme level of abstraction.


The question of the computational model

The Turing machine is the computational model commonly used as a reference in the P vs. NP question. This model, introduced by Alan Turing in 1936, is theoretical and predates the first digital computers. The Turing machine does not consider the aspect of computational complexity; it considers only whether an algorithm can produce a result, with no limit on time or space (the machine's unlimited tape). The Turing machine sets the limits of computation, and no alternative computational model can exceed this limit.
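For concreteness, a one-tape Turing machine can be simulated in a few lines of Python. The transition-table format below is an assumption made for this sketch; the essential point is that the model imposes no limit on running time or tape length:

    def run_turing_machine(transitions, tape, state="q0", accept="halt"):
        # transitions maps (state, symbol) -> (new_state, new_symbol, move),
        # where move is -1 (left), 0 (stay) or +1 (right).
        # The tape is unbounded: blanks ("_") are appended on demand.
        tape, head = list(tape), 0
        while state != accept:
            state, tape[head], move = transitions[(state, tape[head])]
            head += move
            if head < 0:                 # extend the tape to the left
                tape.insert(0, "_")
                head = 0
            elif head == len(tape):      # extend the tape to the right
                tape.append("_")
        return "".join(tape)

    # A machine that inverts a binary string and halts on the blank:
    flip = {("q0", "0"): ("q0", "1", +1),
            ("q0", "1"): ("q0", "0", +1),
            ("q0", "_"): ("halt", "_", 0)}
    print(run_turing_machine(flip, "0110_"))   # prints "1001_"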

However, this model is superficial, low-level, detail-oriented. It is not conducive to creativity, so necessary for solving NP-hard problems. The alternative is MENTAL, a language that provides a new computational model based on degrees of freedom, flexible, open and creative. Maximum creativity emerges from the deep, from the level of supreme abstraction. From this model, all problems are simpler, everything is clarified and simplified because everything is contemplated from the gates of consciousness: the universal semantic primitives. MENTAL brings a new paradigm, a unifying paradigm that allows us to see with new eyes old problems, in particular the P vs. NP problem.

MENTAL is a conceptual, theoretical, ideal language (like the Turing machine), which is independent of the available physical resources. For example, if certain resources (such as parallelism of expressions) do not exist, they must be simulated. It is, in short, the problem of competence (or theoretical computational power) versus efficiency (practical computational power with the available resources).

With MENTAL it is possible to create more efficient and creative algorithms than traditional ones thanks to its flexible, high-level computational resources. Thanks to these resources, computational complexity should decrease, in the sense of requiring fewer temporal and spatial resources.


Conclusions
  1. P is not equal to NP, because NP-easy and NP-hard are two poles or extremes of complexity. But both kinds of complexity share the same computational essence, the same primal archetypes: the universal semantic primitives.

  2. The true nature of computation arises not from solving the P vs. NP problem, but from identifying the deep, high-level abstraction resources that are the universal semantic primitives.

  3. The P vs. NP issue is independent of the ZF axiomatics of set theory. But there is a certain analogy, for universal semantic primitives play a role analogous to that of the axioms of set theory. They are semantic axioms.

  4. With MENTAL we do not obtain all the benefits that would follow if P = NP were true, but we do obtain the following:

    • It reveals the computational degrees of freedom, which define the limits of what is computable (and describable) and represent the true essence of computation.

    • Complexity decreases. Everything is easier, everything is simplified and clearer.

    • It provides a privileged point of view, of maximum awareness, from which to contemplate computation. In fact, computation is a manifestation of the archetypes of consciousness.

    • Creativity is encouraged. Creativity cannot be automated but can be programmed. Creativity is inherent in the orthogonal combinatorial nature of primitives.

    • It provides a universal paradigm from which all kinds of specific paradigms can be implemented.

    • Operational and descriptive complexity are united in the same conceptual framework (they are two aspects of the language).

    • Algorithms are very compact (their algorithmic complexity is lower).

    • Artificial intelligence is simpler to develop.

    • It is valid for all problems, whether NP or non-NP. It is universal. We see all problems with the same "glasses", which provide us with the maximum computational resources available. It is up to us to combine those resources in a creative way to optimize as much as possible each problem posed. The greater the complexity of the problem, the greater the degree of creativity required to solve it.

Ultimately, the problem of complexity is the problem of finding computational simplicity, a simplicity that is achieved with the archetypes of consciousness. Paradoxically, computational complexity theory is the theory of computational simplicity.



Addenda

The Clay Institute Award

On May 24, 2000, the Clay Mathematics Institute (based in Cambridge, Massachusetts), on the occasion of a conference in Paris, offered a prize of one million dollars for the solution of each of the "7 millennium problems", problems that the mathematical community considers both important and difficult to solve. The year 2000 was declared "World Mathematical Year" by UNESCO, one hundred years after Hilbert announced his famous 23 problems at the International Congress of Mathematicians, also held in Paris.

In 2003, the Russian mathematician Grigori Perelman solved one of them: the Poincaré conjecture (now turned into a theorem). The other six remain unsolved, and one of them is precisely the P vs. NP problem. It has even been suggested that if P = NP were proved with a practical algorithm, the other problems would probably be solved as well, since the search for mathematical proofs could itself be automated efficiently.


A little history
Deolalikar's proof of P ≠ NP

Vinay Deolalikar, an Indian mathematician working at HP Research Labs in Palo Alto (California), published a paper on 6 August 2010 in which he claimed to have proved that P ≠ NP. His proof is based on the idea that if certain problems, which are known to be NP-complete, were also in P, then they would have impossible statistical properties. Deolalikar's paper, which has been updated several times to correct errors detected in it, is very complex (102 pages long), and it will take some time for the proof to be fully verified; the expert consensus so far is that it contains serious gaps, so the claim remains unproven.

What is interesting about the paper is that the author uses a new research method, combining two different approaches: logic and statistical physics. He moved the computational problem P vs. NP to the world of formal logic, using finite model theory, and focused on the SAT problem (Boolean satisfiability) from the following perspective: if a formula with k clauses over n Boolean variables is chosen, and an assignment of the n variables is chosen at random, what is the probability that the assignment satisfies the formula? If k is very large relative to n, there will almost never be satisfying assignments, no matter how much time is spent searching. If k is small and n is large, there will almost always be solutions. At a certain intermediate point between the two extremes, solutions exist together with a rich structure of probability distributions. According to the author, the constraints of polynomial-time algorithms are too strong to reproduce this rich structure, from which he deduces that P ≠ NP.
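The phase transition that the argument builds on can be observed experimentally. A minimal Python sketch (random 3-SAT with brute-force checking; all parameter values are illustrative):

    import random
    from itertools import product

    def random_3sat(n_vars, n_clauses, rng):
        # Each clause: 3 distinct variables, each negated at random.
        return [[(v, rng.random() < 0.5)
                 for v in rng.sample(range(n_vars), 3)]
                for _ in range(n_clauses)]

    def satisfiable(n_vars, clauses):
        # Brute force over all 2^n assignments (exponential time).
        for bits in product([False, True], repeat=n_vars):
            if all(any(bits[v] != negated for v, negated in clause)
                   for clause in clauses):
                return True
        return False

    # Fraction of satisfiable random formulas as the clause/variable
    # ratio grows: near 1 for small ratios, near 0 for large ones,
    # with a sharp transition around 4.27 for random 3-SAT.
    rng = random.Random(0)
    n = 12
    for ratio in (2, 4, 6):
        sat = sum(satisfiable(n, random_3sat(n, ratio * n, rng))
                  for _ in range(20))
        print(ratio, sat / 20)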


Bibliography

Hartmanis, J.; Stearns, R.E. (1965). "On the Computational Complexity of Algorithms". Transactions of the American Mathematical Society, 117, 285-306.