"While much human reasoning corresponds to traditional logic, a significant portion of common sense reasoning does not appear to be monotonic."(John McCarthy)
"'Logical' reasoning is not flexible enough to serve as a foundation of thought" (Marvin Minsky).
Concept
Classical logic is based on the following property. Suppose we have a set of sentences from which certain conclusions are inferred. If we add new sentences, new conclusions may appear, but the original conclusions are not altered: they are preserved in the extended set of conclusions. This property is called "monotonicity".
Non-monotonic logic is a type of logic aimed at managing dynamic inferences (or tentative conclusions) in the light of the (incomplete) information available at any given time. Conclusions are updated (removed, added or modified) with the appearance of new information.
Normally, adding new information narrows the set of possible conclusions. This also happens when an entity is first described in general terms (e.g., "E is an animal") and new characteristics are then added (e.g., "E is a mammal", "E flies", etc.), which progressively narrow its scope.
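As a minimal sketch (the rule format and the facts are invented for illustration), the monotonicity property can be seen in a toy forward-chaining inferencer: enlarging the fact set never removes conclusions.

```python
# Toy forward-chaining inference: rules are (premises, conclusion) pairs.
# Illustrates monotonicity: enlarging the fact set never removes conclusions.

def conclusions(facts, rules):
    """Return the closure of `facts` under `rules` (classical, monotonic)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [({"animal", "mammal"}, "warm-blooded"),
         ({"warm-blooded"}, "regulates-temperature")]

before = conclusions({"animal"}, rules)
after = conclusions({"animal", "mammal"}, rules)  # more facts added...
# ...and every original conclusion is preserved: before <= after
```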
Monotonic logic vs. non-monotonic logic
The distinguishing characteristics are:
Classical logic is static, in the sense that the initial conclusions remain valid when new facts appear, although new conclusions may be added. Non-monotonic logic is dynamic, in the sense that inferences vary over time depending on the information available at any given moment.
In monotonic logic information is complete and consistent. New facts are consistent with previous facts. In non-monotonic logic the information is incomplete and new information may not be consistent with previous information.
In monotonic logic, inferences are only sensitive to explicit information. In non-monotonic logic, inferences are sensitive to explicit information and also to its absence.
Monotonic logic is simple. Non-monotonic logic is a more complex logic because it involves making assumptions about unknown information and introducing additional conditions in the rules to make inferences go one way or the other. But the logical operators are the same. The complexity lies in the contents.
Monotonic logic is purely formal, rational and mechanical. Non-monotonic logic is a superior logic in the sense that it is based on knowledge and intuition; it is an epistemic logic.
Retractability vs. non-monotonicity
Reasoning in which conclusions are drawn from incomplete information and assumptions, and can later be reversed when more information becomes available, is called "revocable", "refutable" or "retractable" reasoning (defeasible reasoning). Retractable reasoning is common in many fields: scientific research, the design of electronic or mechanical devices, the diagnosis of situations or problems, etc.
Retractability and non-monotonicity are not synonyms. All non-monotonic reasoning is retractable. But the converse is not true: retractability does not imply non-monotonicity, because a retraction may have been caused by a change of context.
The problems of non-monotonic logic
The problems that arise in non-monotonic logic systems are:
The problem of formalization of non-monotonic logic.
What kinds of techniques to apply to update the knowledge base.
How to control the updating of the knowledge base. Small changes can have far-reaching consequences.
How to resolve possible inconsistencies. There may be parts of the knowledge base that are consistent, but globally inconsistent.
How to derive the relevant conclusions to solve a problem.
The implementation problem. A frequent goal is to build artificial intelligence systems that use non-monotonic logic efficiently, but the formal approaches are not easy to implement.
Applications of Non-Monotonic Logic
Artificial Intelligence (AI)
Logic plays an essential role in AI. Classical logic has played an important role in computer science, but it is useless for AI. Normal human reasoning is non-monotonic. Therefore, AI systems must be able to reason non-monotonically. We know that the real world is a dynamic world, where things change. Therefore, knowledge is non-monotonic, i.e., each new piece of knowledge may modify or even invalidate previous knowledge.
An AI system must have intelligent behavior: reasoning and inferring plausible conclusions, planning, making decisions in different types of situations, performing actions, etc., all based on the information it possesses at any given moment. To do this, a computer program capable of acting intelligently must have a model and a general representation of the world. "A computer program capable of acting intelligently in the world must have a general representation of the world in terms of which to interpret the inputs" [McCarthy & Hayes, 1969].
Common sense reasoning
Non-monotonic reasoning is closely related to the topic of formalizing commonsense reasoning, a key topic in AI. Common sense reasoning is human reasoning of a qualitative type accepted by a community.
John McCarthy was the first to emphasize the need and importance of formalizing common sense before there was any theory about it. This field of AI was inaugurated in 1958 by McCarthy in his paper "Programs with common sense" [McCarthy, 1958].
The frame problem
The frame problem is a problem posed by John McCarthy and Patrick Hayes in their 1969 paper "Some Philosophical Problems from the Standpoint of Artificial Intelligence" [McCarthy & Hayes, 1969]. The point is that an artificial agent must know what changes and what does not change as a result of an action it can execute. The things that do not change as a result of an agent's action are normally very numerous, and it is difficult to hold them all in the system's database because they are, in general, not homogeneous. According to McCarthy, the context is too large an entity to be fully specified.
The frame problem involves predictive reasoning, a type of reasoning that is essential for planning and formalizing intelligent behavior: starting from the current situation, knowing the immediately subsequent states for each possible action to be taken.
A simpler and more efficient solution is to reason non-monotonically using the default rule that things do not change except when they are known or explicitly specified to be altered.
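The default rule of inertia can be sketched as follows. The state representation and the action effects are illustrative assumptions, not a formal solution to the frame problem: an action lists only what it changes, and every other fluent persists by default.

```python
# Sketch of the default rule of inertia: an action specifies only its
# explicit effects; all other fluents are assumed not to change.
# The fluents and the action are invented for illustration.

def apply_action(state, effects):
    """Successor state: copy the old state, then apply the explicit effects."""
    new_state = dict(state)    # by default, nothing changes...
    new_state.update(effects)  # ...except what the action explicitly alters
    return new_state

state = {"door": "closed", "light": "off", "robot_at": "room1"}
state2 = apply_action(state, {"robot_at": "room2"})
# Only robot_at changed; door and light persisted without being listed.
```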
Default reasoning
In default reasoning, default properties are assumed as long as nothing else is specified, and conclusions are inferred from them. It is also called "generic reasoning" because it assumes generic or typical properties that represent "normality".
It is based on two principles:
The principle of generality. A distinction must be made between universal and generic statements:
A universal statement is valid for all the elements considered. For example, "All men are mortal."
A generic statement is valid for all typical or normal elements considered, i.e., those that have no special characteristics. For example, "Normal birds fly."
The principle of specificity. Specific information takes precedence over generic information in case of conflict.
Example:
Normal birds fly, although there are flightless birds, which are exceptions, such as penguins. If we know that P is a bird, we can assume that it flies because most birds fly. If it is subsequently known that P is a penguin, then we have to change the above assumption and say that P does not fly.
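The penguin example can be sketched in Python. This is a minimal illustration: the predicates are invented, and the specificity principle is hard-coded by checking the specific property before the generic default.

```python
# Default reasoning with the specificity principle: the specific fact
# "penguins do not fly" defeats the generic default "birds fly".

def flies(properties):
    """Tentative conclusion given what is currently known about an individual."""
    if "penguin" in properties:   # specific information wins
        return False
    if "bird" in properties:      # generic default: normal birds fly
        return True
    return None                   # no basis for a conclusion

assumed = flies({"bird"})             # tentatively: P flies
revised = flies({"bird", "penguin"})  # new information retracts that
```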
Abductive reasoning
Abductive reasoning is a type of retractable reasoning in which, starting from a fact, one formulates the most probable hypothesis: the simplest one that best explains the fact. The hypothesis is introduced as a new rule that connects it with the fact.
For example, if the grass is wet, then it most likely rained. This conclusion can be retracted if you subsequently observe that the roof is dry or if you know that a sprinkler was running.
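A rough sketch of abduction as selection of the cheapest explanation consistent with the evidence. The hypotheses, their consequences and their costs are invented for illustration; negated evidence is encoded with a "not_" prefix for brevity.

```python
# Abduction: pick the simplest (cheapest) hypothesis that explains the
# observation and does not clash with the rest of the evidence.

hypotheses = {
    "it_rained":    {"explains": {"grass_wet", "roof_wet"}, "cost": 1},
    "sprinkler_on": {"explains": {"grass_wet"}, "cost": 2},
}

def abduce(observation, evidence):
    """Cheapest hypothesis that explains the observation and fits the evidence."""
    candidates = []
    for name, h in hypotheses.items():
        if observation not in h["explains"]:
            continue
        # Reject a hypothesis whose predictions clash with the evidence,
        # e.g. "it rained" predicts a wet roof, but the roof is observed dry.
        if any(("not_" + p) in evidence for p in h["explains"]):
            continue
        candidates.append((h["cost"], name))
    return min(candidates)[1] if candidates else None

first = abduce("grass_wet", set())               # most likely: it rained
revised = abduce("grass_wet", {"not_roof_wet"})  # a dry roof retracts that
```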
Hypothetical statements are essential in the formulation of scientific theories and the scientific method, pioneered by Galileo and Francis Bacon. For Popper, hypotheses and scientific theories are conjectures that have to be refutable.
Non-monotonic inheritance
Non-monotonic inheritance is the problem of constructing hierarchies when exceptions exist [Horty, 1994]. Two principles apply:
The inheritance principle.
When we have knowledge organized by taxonomies, a class is assumed to inherit the properties of higher classes, as long as no exceptions are specified.
The principle of specificity.
Specific information takes precedence over inherited information in case of conflict. This principle applies when there are exceptions to the inheritance principle. There may be higher-order exceptions (exceptions to exceptions).
For example:
Mammals do not fly.
Bats are mammals.
Conclusion: Bats do not fly.
Exception: Bats do fly.
New conclusion: Bats fly.
Exception to the exception: Baby-bats don't fly.
New conclusions: Non-baby bats fly. Baby bats do not fly.
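The bat example can be sketched as a class hierarchy in which the most specific class that says anything about a property decides it. This is a minimal reading of the two principles; the data structures are illustrative.

```python
# Non-monotonic inheritance with specificity: walk up the hierarchy and
# let the first (most specific) class that mentions the property decide.

hierarchy = {"baby_bat": "bat", "bat": "mammal", "mammal": None}
local_rules = {"mammal":   {"flies": False},   # mammals do not fly
               "bat":      {"flies": True},    # exception: bats fly
               "baby_bat": {"flies": False}}   # exception to the exception

def inherits(cls, prop):
    """Value of `prop` for `cls`, decided by the most specific ancestor."""
    while cls is not None:
        if prop in local_rules.get(cls, {}):
            return local_rules[cls][prop]
        cls = hierarchy[cls]
    return None
```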
Formalisms of Non-Monotonic Logic
Default logic
Default logic [Reiter, 1980] supplements first-order predicate logic with default rules of the form
p : q / r
where p is the prerequisite (what is known), q the justification, and r the consequent. Its meaning is: "If p is known and q is consistent with what is known, then infer r." In the simplest cases (normal defaults), q and r are equal.
The following notation is commonly used for different types of inference:
Strong or strict inference: A ⇒ B (from A it follows B).
Weak or default inference or assumption: A → B (A usually implies B). This inference is retractable.
Negative inference: A ↛ B (A does not imply B).
For example:
penguin ⇒ bird
bird → flies
penguin ↛ flies
The principle of specificity applies: the specific prevails over the generic. In the example, penguin ↛ flies prevails over the chain penguin ⇒ bird → flies; in general, A ↛ B prevails over A → B.
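How a normal default of the form p : q / r can be applied may be sketched as follows: the consequent is added only while the justification is consistent with the knowledge base. The flat-literal encoding with a "¬" prefix is an assumption for brevity; real default logic operates on first-order theories and extensions.

```python
# One step of default rule application, as a rough sketch:
# the default (p, q, r) fires when p is derivable and ¬q is not in the base.

def apply_defaults(kb, defaults):
    """Add the consequent r of each default whose prerequisite p holds
    and whose justification q is consistent with the current base."""
    kb = set(kb)
    changed = True
    while changed:
        changed = False
        for p, q, r in defaults:
            if p in kb and ("¬" + q) not in kb and r not in kb:
                kb.add(r)
                changed = True
    return kb

defaults = [("bird", "flies", "flies")]   # normal default: bird : flies / flies

kb1 = apply_defaults({"bird"}, defaults)            # "flies" is assumed
kb2 = apply_defaults({"bird", "¬flies"}, defaults)  # blocked: justification fails
```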
Circumscription
Developed by John McCarthy [1980], circumscription is a formalization based on general sentences that cover most cases, which are then delimited (circumscribed) by other sentences specifying exceptions or particular cases.
Circumscription is based on the following principles:
The principle of generalization. A generic statement is valid for all typical or normal elements considered, i.e., those that have no special characteristics. Negative sentences that express exceptions, such as "penguins do not fly", are not generic statements.
The principle of specificity: specific information prevails over generic information. It serves to specify exceptions.
The minimization principle. It is about minimizing the extensions of infrequent (or abnormal) predicates. The extension of a predicate is the set of sentences where the predicate appears as true.
Practical rules are given that circumscribe abnormal predicates so that they apply only to those entities that are known to be abnormal, with the information available at any given time.
McCarthy used circumscription to try to formalize common sense and the frame problem. He also used it to formalize the principle of inertia: things do not change unless otherwise stated.
McCarthy formalized circumscription using second-order logic, which allows quantification over predicates, to minimize the extension of abnormal or infrequent predicates.
For example:
All normal birds fly.
Non-normal (abnormal) birds do not fly.
All birds are assumed normal unless known otherwise.
Penguins are birds, and penguins are not normal.
P is a bird.
Conclusion: P flies.
New fact: P is a penguin; therefore P is not normal.
New conclusion: P does not fly.
Conclusions (including the facts): P is a bird, P is a penguin, P does not fly.
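The minimization of abnormality can be sketched as follows, under the simplifying assumption that an individual is abnormal only if one of its known classes forces it. This is only an illustration, not McCarthy's second-order formalization.

```python
# Circumscription sketch: minimize the extension of the abnormality
# predicate ab — an individual is assumed normal unless abnormality
# follows from what is known. The class names are illustrative.

known_abnormal = {"penguin"}   # classes known to be abnormal w.r.t. flying

def ab(classes):
    """Abnormal only if forced by some known class of the individual."""
    return bool(classes & known_abnormal)

def flies(classes):
    return "bird" in classes and not ab(classes)

first = flies({"bird"})              # ab(P) minimized to false: P flies
revised = flies({"bird", "penguin"}) # ab(P) now forced: P does not fly
```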
Closed World Assumption (CWA)
This formalism, due to Raymond Reiter [1978], consists in presupposing that every true sentence is known. Therefore (by modus tollens), what is not known is false. It is particularly useful for reasoning about databases that are assumed to be complete.
Examples:
A travel agent has access to a database of flights and has to answer a client about whether there is a direct flight from one city A to another city B. The query to the database gives as an answer that there is or there are no direct flights.
We have a database of a company's personnel, which we assume to be complete. A query as to whether a given employee works for the company produces either "yes" or "no" as the answer.
This formalism is a purely formal process. It involves non-monotonic inference, since the addition of new information may produce a different answer.
In the open world assumption formalism (Open World Assumption, OWA), not known does not imply falsity.
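The contrast between the two assumptions can be sketched over a flight table (the routes are invented for illustration): under CWA an absent tuple is false, while under OWA it is merely unknown.

```python
# Closed vs. open world assumption over a (presumed complete) flight table.

flights = {("Paris", "Rome"), ("Rome", "Athens")}

def direct_flight_cwa(a, b):
    """CWA: the database is complete, so absence means 'no'."""
    return (a, b) in flights

def direct_flight_owa(a, b):
    """OWA: absence means 'unknown', represented here as None."""
    return True if (a, b) in flights else None
```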
Autoepistemic logic
Autoepistemic logic [Moore, 1985] is based on the idea that we can make inferences from our introspective knowledge by reasoning about our own beliefs. For example, I can infer that I don't owe a million euros to anyone, because if I did I would know.
In this type of logic knowledge is represented by practical rules with implications about our beliefs.
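As a rough illustration (the belief set is invented), the introspective inference resembles a closed-world test applied to one's own beliefs: for matters one would certainly know about oneself, absence of belief supports negation.

```python
# Autoepistemic sketch: "if p were true, I would believe p;
# I do not believe p; therefore not-p."

beliefs = {"owns_car", "paid_rent"}   # everything the agent believes

def self_infer_negation(p):
    """True if not-p can be inferred introspectively (p not believed)."""
    return p not in beliefs

no_million_debt = self_infer_negation("owes_million_euros")
```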
Logic programming
In logic programming, a program consists of a set of rules of the form
r ← p1, ..., pn, not-q1, ..., not-qm
For example,
flies ← bird, not-penguin
bird ← penguin
flightless ← penguin
(a bird that is not a penguin flies; penguins are birds; penguins are flightless)
In logic programming, the principle of "default negation" (negation as failure) is used: if it cannot be shown that an element has a property, it is assumed not to have it. In the example above, if it has not been specified that a bird is a penguin, it is assumed not to be a penguin.
The logic programming language Prolog, developed by Colmerauer and his team [1973], was the first language to incorporate non-monotonic logic.
Suppose P is a bird. By default, it is not a penguin. Therefore, applying the first rule, P flies. The other two rules do not apply.
Suppose P is a penguin. The first rule does not apply, but the other two do. Therefore, P is a flightless bird.
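The evaluation can be sketched with a naive bottom-up interpreter using negation as failure, assuming the program consists of three rules: a non-penguin bird flies, penguins are birds, penguins are flightless. The encoding is illustrative; since no rule derives "penguin", the program is stratified and the naive evaluation is safe.

```python
# Negation as failure: a literal "not q" succeeds when q cannot be derived.

rules = [("flies", ["bird", "not penguin"]),
         ("bird", ["penguin"]),
         ("flightless", ["penguin"])]

def derive(facts):
    """Naive bottom-up evaluation with negation as failure."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            ok = all((lit[4:] not in derived) if lit.startswith("not ")
                     else (lit in derived) for lit in body)
            if ok and head not in derived:
                derived.add(head)
                changed = True
    return derived

bird_case = derive({"bird"})        # P is a bird: P flies
penguin_case = derive({"penguin"})  # P is a penguin: bird, flightless
```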
The Non-Monotonic Logic in MENTAL
With MENTAL everything is simpler, clearer and more direct because we work directly with primary archetypes, the archetypes of consciousness:
Unification.
From the deep level everything is interrelated, everything is contemplated as the same thing: artificial intelligence, non-monotonic logic, common sense reasoning, modal logic, databases, knowledge representation, logic programming, event-driven programming, and so on. At a deep level, everything is unified.
Monotonic logic vs. non-monotonic logic.
It is often claimed that non-monotonic logic is not deductive logic. This is not true. Logic has two aspects: decision logic (the one based on the concept of "condition") and the logic of inference or deduction. Non-monotonic logic is a logic that is initially not deductive (or partially so) and that has to be completed by assumptions or hypotheses in order to become deductive. The formalization is the same. It only depends on the contents used in each case.
Formalisms.
All formalisms of non-monotonic logic can be expressed with MENTAL. If some formalism could not be expressed in MENTAL, then MENTAL would not be universal.
Formalization.
Formalization of a system of non-monotonic logic is accomplished by generic expressions (the knowledge base) and non-generic expressions (the fact base), both bases residing in abstract space.
Rules are generic expressions that are permanently activated to automatically infer conclusions. The "engines" of the inferences are the generic expressions of the type "condition → action". There may be higher order rules (meta-rules). The reasoning or process is always forward (or top-down). Facts are also considered conclusions. Both facts and generic expressions can be modified.
One possibility available in MENTAL is to apply a factor (between 0 and 1) to indicate the degree of truth of an expression. For example,
〈( x/bird → 0.8*(x/fly) )〉
which is also interpreted in a probabilistic sense: 80% of the birds fly.
Generic expressions are continuously applied until a stable situation of abstract space is reached. Actually, they continue to be applied all the time, but nothing is changed. It may also happen that the situation of the abstract space remains dynamic, and may or may not have a pattern of dynamicity. This situation is analogous to the issue of the result of a process in a Turing machine.
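As an analogy only (MENTAL's continuously active generic expressions are not literally reproduced), applying rules until the abstract space stabilizes resembles iterating them to a fixpoint; the rule shown is invented for illustration.

```python
# Apply every rule to the space until nothing changes (stable situation),
# or give up after max_steps (the situation may remain dynamic).

def run_to_fixpoint(space, rules, max_steps=100):
    for _ in range(max_steps):
        new_space = set(space)
        for rule in rules:
            new_space |= rule(new_space)
        if new_space == space:
            return space, True     # stable situation reached
        space = new_space
    return space, False            # still dynamic after max_steps

rules = [lambda s: {"fly"} if "bird" in s and "penguin" not in s else set()]
space, stable = run_to_fixpoint({"bird"}, rules)
```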
Execution.
It is not a matter of working out logical systems and then implementing them on a computer in a certain language. MENTAL expressions are directly executable. There is unification between formalization and implementation.
The execution of a non-monotonic logic system is essentially no different from a standard process containing generic expressions involving dynamicity (those using substitution and condition primitives) acting on expressions in the abstract space.
Artificial intelligence.
McCarthy was the first to propose logic as a language for AI: to formalize common sense and to represent knowledge. But AI must be grounded in the archetypes of consciousness, not just logic. To use logic alone in AI is to make the same mistake as trying to ground mathematics in logic alone (logicism), the mistake made by Frege and Russell. Primary archetypes are the foundation of everything, including mathematics and AI.
MENTAL supplies, not the representation of the world (which McCarthy called for), but something deeper: the deep structure of all representation, both internal and external.
The frame problem.
The frame problem is a pseudo-problem. This problem disappears because the frame is the abstract space, where expressions can change and conclusions change automatically. An AI program is not aware of what it is doing or its environment. It is the programmer's responsibility to control the actions and their effects, both internally (the program itself) and externally (the abstract space).
The context, the framework, the abstract space with its contents are always taken into account. Context and knowledge are represented with the same language. In traditional logic there is no clear formalization of context.
The true-false question.
Premises and conclusions are treated in non-monotonic logic in terms of true or false, as in deductive logic. In MENTAL expressions are treated in terms of existence or non-existence or with a degree of existence. When expressions refer to the external world, then existence can be made equivalent to truth.
The sentence "All birds fly" is false, because there are birds that do not fly.
The sentence "Birds fly" is neither true nor false. It is partly true and partly false, although it is more true than false.
The sentence "Normal birds fly" is true. The predicate "normal" is introduced in "Birds fly" to make it true so that deductive logic can be applied.
Logic programming.
MENTAL allows you to specify all kinds of declarative expressions, not just logical ones, both generic and specific.
In short, with MENTAL it is very easy to specify a non-monotonic logic system from the formal point of view. Once again, everything is clearer, more direct and intuitive, thanks to the possibilities of the language, mainly the generic expressions, which make it possible to infer consequences at any time and to modify the expressions of the abstract space.
Non-monotonic logic is not a new logic. The underlying logic does not change. It is a logic extended with assumptions or hypotheses and with meta-rules to describe principles (such as generality, specificity and inheritance). Once extended with these contents, the resulting logic is a deductive logic.
Non-monotonic formalisms do not solve the problems of reasoning in AI. AI must be approached from degrees of freedom (the primary archetypes), not just from the logical point of view.
Generalization
According to the principle of generalization, the concept of "non-monotonic" has to be general, i.e., apply not only to logic. In general, a non-monotonic system is one in which the abstract space changes every time new facts (non-generic expressions) are added. A non-generic expression can modify a generic expression, as in the following simple example:
Program:
(x := 1)
〈( (y = b) ←' x=1 → (y = a) )〉
Initial situation:
(x = 1) (y = a)
Addition of a new fact and modification of an existing one:
(x = 2)
(z = u)
New situation:
(x = 2) (y = b) (z = u)
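As a Python analogy of this example (the names x, y, a, b follow the example; the permanently active generic expression is only simulated by recomputing y on demand), the generic rule keeps y consistent with x at all times: y is a while x = 1, and b otherwise.

```python
# Analogy of a generic expression: y is derived from x at every moment.

facts = {"x": 1}

def y():
    """Derived value: a while x = 1, b otherwise."""
    return "a" if facts["x"] == 1 else "b"

y_initial = y()        # initial situation: y = a
facts["x"] = 2         # modification of an existing fact
facts["z"] = "u"       # addition of a new fact
y_updated = y()        # new situation: y = b
```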
Addenda
Origin of non-monotonic logic
Non-monotonic logic arose from the realization that first-order predicate logic was inadequate for handling commonsense reasoning, retractable inferences and, in general, the problems posed by AI. Its pioneers were John McCarthy (one of the fathers of artificial intelligence), Drew McDermott, Jon Doyle and Raymond Reiter.
Non-monotonic logic has developed mainly in the field of AI, specifically on the topic of formalizing common sense reasoning. Today it is considered an essential part of AI.
In 1980, the Artificial Intelligence journal published a monographic issue (volume 13) on the theories and formalisms of non-monotonic logic, an event that is considered the beginning of the era of this new logic.
The three types of reasoning
According to Peirce, there are three types of reasoning:
Deductive or top-down.
Classical logic, monotonic logic, applies. Conclusions are drawn from premises. It is purely rational, from the mode of consciousness of the left side of the brain.
Inductive or bottom-up.
From various facts a general conclusion is inferred that explains all the facts. It is intuitive, from the mode of consciousness of the right side of the brain.
Abductive.
Combines the previous two. Non-monotonic logic is applied. It tries to explain a fact by means of an explanatory hypothesis. Peirce called abductive reasoning "guessing".
Induction relies on the regularity of facts. Abduction works with knowledge of an unexpected, infrequent, or abnormal fact, the cause of which has not yet been determined, and which needs an explanatory hypothesis.
Abductive reasoning is intuitive and rational. It is, therefore, associated with consciousness and creativity. It tries to make reality intelligible by formulating hypotheses that attempt to give a rational explanation to a fact or phenomenon. This type of reasoning is what makes the progress of science possible. Consciousness is manifested or expressed in the rule that unites the opposites (the hypothesis and the fact).
Bibliography
Antoniou, Grigoris. Non-monotonic Reasoning. The MIT Press, 1997.
Bidoit, N.; Hull, R. Minimalism, justification and non-monotonicity in deductive databases. Journal of Computer and System Sciences, 38: 290-325, 1989.
Brewka, G. Non-monotonic Reasoning: Logical Foundations of Commonsense. Cambridge University Press, 1991.
Brewka, G.; Dix, J.; Konolige, K. Non-monotonic Reasoning: An Overview. CSLI Publications, 1997.
Burks, Arthur W. Peirce's Theory of Abduction. Philosophy of Science, 13: 301-306, 1946.
Cadoli, M.; Schaerf, M. A survey of complexity results for non-monotonic logics. Journal of Logic Programming, 17: 127-160, 1993.
Colmerauer, Alain; Kanoui, Henry; Roussel, Philippe; Pasero, Robert. Un système de communication homme-machine en Français. Groupe de recherche en Intelligence Artificielle, Marseille, 1973.
Gallaire, H.; Minker, J. (eds.). Logic and Databases. Plenum Press, 1978.
Ginsberg, M.L. (ed.). Readings in Non-monotonic Reasoning. Morgan Kaufmann, 1987.
Harel, D. Dynamic Logic. In D. Gabbay and F. Guenthner, eds., Handbook of Philosophical Logic, vol. II: Extensions of Classical Logic, chapter 10. pp. 497-604. Reidel, 1984.
Harel, D.; Kozen, D.; Tiuryn, J. Dynamic Logic. MIT Press, 2000.
Horty, J.F. Non-monotonic Logic. In Goble, Lou (ed.), The Blackwell Guide to Philosophical Logic. Blackwell, 2001.
Horty, J.F. Some direct theories of non-monotonic inheritance. In D.M. Gabbay, C.J. Hogger and J.A. Robinson (eds.), Handbook of Logic in Artificial Intelligence and Logic Programming 3: Non-monotonic Reasoning and Uncertain Reasoning. Oxford University Press, 1994.
Lukaszewicz, W. Non-Monotonic Reasoning. Ellis-Horwood, 1990.
McCarthy, John. Applications of Circumscription to Formalizing Commonsense Knowledge. Artificial Intelligence, 28 (1): 89-116, 1986.
McCarthy, J. Circumscription. A form of non-monotonic reasoning. Artificial Intelligence, 13: 27-39, 1980.
McCarthy, J.; Hayes, P. Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer and D. Michie (eds.), Machine Intelligence 4, pp. 463-502. Edinburgh University Press, 1969.
McCarthy, John. Programs with common sense. In Proceedings of the Symposium on Mechanization of Thought Processes, pp. 75-91. National Physical Laboratory, Teddington, England, 1958.
McDermott, D.; Doyle, J. Non-monotonic logic I. Artificial Intelligence, 13: 41-72, 1980.
Moore, R.C. Semantical considerations on nonmonotonic logic. Artificial Intelligence, 25: 75-94, 1985. Reprinted in Ginsberg, 1987.
Reiter, R. Nonmonotonic reasoning. Annual Review of Computer Science, 2: 147-186, 1987.
Reiter, R. On closed world data bases. In H. Gallaire and J. Minker (eds.), Logic and Data Bases, pp. 55-76. Plenum Press, 1978.
Reiter, R. A logic for default reasoning. Artificial Intelligence, 13: 81-132, 1980. Reprinted in Ginsberg, 1987.