One of the central themes in Artificial Intelligence, Cognitive Science and Philosophy
"A new, deep epistemological problem, accessible in principle, but unnoticed by generations of philosophers" (Daniel Dennett).
Origin of the problem
The frame problem is a significant example of the limitations of the logical paradigm of Artificial Intelligence (AI), the so-called "logicist AI", a branch of AI that attempts to formalize human reasoning (including common sense reasoning) by applying exclusively mathematical logic.
The problem was first raised by John McCarthy and Patrick Hayes [1969] in their famous paper "Some Philosophical Problems from the Standpoint of Artificial Intelligence".
McCarthy and Hayes posed the problem as the difficulty of expressing, in first-order predicate logic, that which does not change (that which persists) in a dynamic domain as a consequence of the actions taking place in it. To address this problem they developed the so-called "situation calculus", an application of first-order predicate logic that made it possible to represent cause-effect relations and to reason with time taken into account (temporal reasoning). In the situation calculus there appear attributes of objects that do not change (persist) and others that change (are dynamic) as a consequence of actions.
Three concepts appear in the situation calculus:
Situations. A situation is a state of the system at a given time.
Fluents. A fluent is a condition that can vary over time.
Actions. They are the generators of new situations.
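The three concepts can be sketched informally as follows (a minimal illustration in Python, not the authors' logical formalism; all names are hypothetical):

```python
# Minimal sketch of the situation-calculus idea (illustrative only;
# the real calculus is expressed in first-order predicate logic).
# A situation is a snapshot of fluent values; an action maps a
# situation to a new situation.

def result(action, situation):
    """Return the new situation produced by applying an action."""
    new_situation = dict(situation)  # fluents persist by default
    new_situation.update(action)     # the action changes some fluents
    return new_situation

s0 = {"color": "red", "position": (0, 0)}   # initial situation
s1 = result({"position": (1, 0)}, s0)       # action: move
s2 = result({"color": "blue"}, s1)          # action: paint

print(s2)  # {'color': 'blue', 'position': (1, 0)}
```

Note that the dictionary copy silently builds in persistence: every fluent the action does not touch keeps its value. In a purely logical formulation this persistence is exactly what must be stated explicitly, which is where the frame problem begins.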
A simple example, illustrating the spirit of the frame problem, is the following:
We have an object with two attributes, color and position, and two possible actions: paint it one color or move it to another position. If the object is painted, it does not change its position. And if the object is moved, its color does not change.
This example, which at the human level is simple common sense, cannot be expressed concisely in classical predicate logic: nothing in the logic itself says that painting leaves the position alone, or that moving leaves the color alone.
To solve the problem, McCarthy and Hayes proposed "frame axioms" to specify things that do not change in a dynamic domain. For example: "With the action of moving, the color does not change". Without such axioms, a system is not able to deduce which attributes persist.
But it happens that a large number of frame axioms may be needed to specify the attributes that do not change under different actions. Indeed, a frame axiom would be needed for each action and attribute that does not change, which would be tedious, inelegant and inefficient. Moreover, it may happen that, under concurrent actions, some axioms are false.
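The combinatorial burden can be made concrete with a small count (an illustrative sketch; the names are hypothetical): one frame axiom is needed for every action-fluent pair in which the action leaves the fluent untouched.

```python
# Sketch: the number of frame axioms grows with the number of
# (action, fluent) pairs where the action does NOT change the fluent.
actions = ["paint", "move"]
fluents = ["color", "position"]
effects = {"paint": {"color"}, "move": {"position"}}  # what each action changes

frame_axioms = [(a, f) for a in actions for f in fluents
                if f not in effects[a]]
print(frame_axioms)  # [('paint', 'position'), ('move', 'color')]
```

With only two actions and two fluents this gives two frame axioms, but in a realistic domain with many actions and many fluents the count approaches the product of the two, which is the tedium the text describes.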
The frame problem consists in trying to find a synthetic way to specify attributes that do not change in a dynamic environment, without having to specify frame axioms.
One way to simplify the problem is to apply what is called "the common sense law of inertia": a property does not change (persists) unless otherwise specified in a frame axiom. But this general assumption has drawbacks:
It cannot be formalized with the resources of traditional formal logic.
It does not cover concurrent actions.
There may be actions with non-deterministic effects.
There may be actions with indirect ramifications.
There may be conflicting rules. An example is the so-called "Yale shooting problem" [see Addendum].
The term "frame problem" derives from a technique used in animated cartoons (framing), where there is a fixed background scene on which the animated parts are superimposed. In this context, the actions specify the things that change, while the rest (the frame) remains unchanged.
Solution approaches to the frame problem
Many solutions to the frame problem exist today. Most of them are expressed in an AI language, such as Lisp or Prolog. Two in particular stand out:
Use a non-monotonic (or nonmonotonic) logic, i.e. a logic where, in a dynamic environment, it is possible to retract a conclusion when new information is added. It is also possible to specify exceptions, restrictions on predicates, etc. Classical logic is monotonic: the set of conclusions obtained from information always grows with the addition of new information. An example of non-monotonic logic is circumscription, created by John McCarthy [1980, 1986] to formalize common sense.
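The retraction behavior can be illustrated with a toy default rule (a sketch of the flavor of non-monotonic reasoning, not of circumscription's actual formalization; all names are hypothetical):

```python
# Toy default rule in the spirit of non-monotonic reasoning:
# a conclusion drawn by default is retracted when new, more
# specific information is added.
def flies(facts):
    """Default: birds fly, unless known to be abnormal."""
    return "bird" in facts and "abnormal" not in facts

facts = {"bird"}
print(flies(facts))    # True: concluded by default

facts.add("abnormal")  # new information (e.g., it is a penguin)
print(flies(facts))    # False: the earlier conclusion is retracted
```

In classical (monotonic) logic, adding a fact could never remove a previously derived conclusion; here it does, which is the defining feature.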
Use a procedural approach with memory. With it, the frame problem is simplified, since the law of inertia of common sense is naturally implemented by a language operating on a memory reflecting the situation of the environment. If an attribute changes, it changes only in memory, the others remaining unchanged. But in addition to a procedural language, we need the language to allow the use of the linguistic resources of traditional logic, mainly the rules (generic and specific). Ideally, there should be a single language with procedural and logical resources.
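The procedural idea can be sketched as follows (illustrative Python, not a real AI language; the names are hypothetical): the world model is a region of memory, and each action writes only the cells it changes, so everything else persists for free.

```python
# Sketch of the procedural approach with memory: the world state lives
# in memory, an action writes only the attribute it changes, and all
# other attributes persist automatically (the common-sense law of inertia).
world = {"color": "red", "position": (0, 0)}

def paint(new_color):
    world["color"] = new_color        # only the color cell is touched

def move(new_position):
    world["position"] = new_position  # only the position cell is touched

paint("green")
print(world)  # {'color': 'green', 'position': (0, 0)}
```

No frame axioms are needed because persistence is a side effect of how memory works: what is not overwritten stays as it was.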
Other solution approaches are based on using heuristic rules, using causal connections, using frames (Minsky), scripts (Schank), etc.
The various conceptions of the frame problem
The frame problem as presented by McCarthy and Hayes was too restricted. Today the frame problem has been generalized and is given a broader interpretation in AI. In any case, there are different opinions and interpretations of it:
It is the persistence problem, i.e., the problem of the attributes that do not change. This is the original frame problem.
It is the problem of knowing how a single piece of information affects the rest of the database (or environment).
It is the problem of determining which elements of a system have been altered after the occurrence of an event.
The problem of determining the consequences of an action; what changes, what does not change and how it affects the predictive model.
It is the problem of representing the effects of an action without having to explicitly represent the detail of the non-effects.
It is the problem of describing in a synthetic way how an action affects or does not affect the environment without having to specify all possible collateral effects, distinguishing between relevant and non-relevant information.
The frame problem is also considered from the philosophical point of view, specifically from epistemology. It is what is called the "epistemological frame problem". The first philosopher to refer to this problem was Daniel Dennett [1978], who managed to popularize it.
It is also considered from the point of view of cognitive science, the science that deals with how information is represented in the mind/brain and transformed into knowledge.
Actually, the frame problem, whether from the AI, philosophical or cognitive point of view, is the problem of meaning or consciousness. It is a problem whose paradigmatic example is John Searle's Chinese room metaphor. In this sense the frame can be identified with consciousness or with common sense.
For example, if we have a robot acting in a dynamic environment, the frame problem is in this case the problem of adaptation of the robot to the environment. The behavior of the robot at any given time is determined by the knowledge base (or model of the world) and the input from the environment (the modification of the environment). The robot must ignore irrelevant changes (with respect to the immediate previous situation) and consider only the significant ones directly related to its task or goal.
To do this, the robot should "understand", or be aware at a general or global level of, what is happening at all times, of each new situation, and distinguish between relevant (or useful) information and information that is not. It must also be aware of the consequences of its actions, and for that it should distinguish between superficial (or particular) meaning and deep (or general) meaning. Actually, the frame problem is the problem of creating a computational model that mimics human behavior, including common-sense reasoning.
The implications and connections of the frame problem
The frame problem is not an isolated problem in AI. It is linked to all major issues in AI:
The general problem of knowledge representation.
The problem of reasoning.
The formalization of common sense.
Temporal reasoning: what changes and what does not change with time.
The question of automatic inferences.
The creation of a computational model of the mind.
Concurrent actions.
Detection of collateral effects and implicit changes.
Branching: new unanticipated situations that may arise. For example, when a robot knocks down a block a new problem emerges to which it must react.
The problem of knowledge base adaptation. In particular, the problem of the qualification or categorization of rules to know which ones to apply and which ones to ignore in a given situation, and even the creation of new rules.
The predictive problem: foreseeing in advance what is going to happen or may happen in order to take the corresponding preventive actions.
The general (not detailed) specification of actions (e.g., rent a car, go to Barcelona, etc.).
Gaps, i.e., periods when you do not have complete knowledge of what is going on.
Etc.
MENTAL's Solution to the Frame Problem
In MENTAL the solution to the frame problem is enormously simple and is based on:
Consider the environment (the abstract space where expressions reside) as persistent memory. Expressions persist unless they are explicitly modified. This is an application of the common-sense law of inertia.
Use generic expressions (parameterized or not), because generic expressions are active (evaluated) at all times. With generic expressions, procedures and rules can be specified.
In the simple example mentioned above, of an object a with the attributes color and position: when the value of the color attribute is changed, the object automatically changes color, since this has been specified in the generic expression, but the position attribute does not change. The same happens when the position attribute is changed; the color attribute does not change.
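For illustration only, the behavior described can be sketched outside MENTAL (in Python; the names are hypothetical and this is not MENTAL syntax):

```python
# Illustrative sketch of the described behavior: the object's state
# persists in memory, so changing one attribute leaves the other intact.
class Obj:
    def __init__(self, color, position):
        self.color = color
        self.position = position

a = Obj("red", (0, 0))
a.color = "blue"        # paint: the position is unaffected
a.position = (3, 4)     # move: the color is unaffected
print(a.color, a.position)  # blue (3, 4)
```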
MENTAL makes it possible to address the frame problem and also virtually all related problems, as it overcomes the limitations of traditional AI languages (Lisp and Prolog), providing flexible and powerful linguistic resources for representing knowledge in a changing environment in which there may even be different interacting agents. Some of the available resources are:
Specification of all kinds of expressions (functions, rules, objects, attributes, etc.).
Actually, all these topics are applications of MENTAL, and all are related through its universal semantic primitives.
Addendum
The Yale shooting problem
The name of this problem comes from the fact that it was proposed by Steve Hanks and Drew McDermott [1987], from Yale University. This problem refers to situations where classical logic is insufficient. It is actually an example of non-monotonic temporal reasoning. The example proposed by Hanks and McDermott is as follows:
There is a sequence of situations. Initially Fred (a turkey) is alive and there is an unloaded gun. In the next situation the gun is loaded and Fred is still alive. In the next situation, the gun is fired at Fred, and Fred is assumed to be dead and the gun unloaded.
In this example, there are two fluents (time-varying conditions): whether Fred is alive and whether the gun is loaded. And there are two actions: loading the gun and firing (unloading) it.
In normal logic, Fred is expected to be dead. However, there may be circumstances that cause Fred to survive: the gun malfunctions, the shot goes wide, Fred has moved, etc.
This problem has been tackled in many ways: including more variables or predicates, adding new conditions, considering higher-order conditions, using time-varying conditions, etc.
The Yale shooting problem is often presented as an illustrative example of a new form of reasoning: non-monotonic temporal reasoning.
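The scenario can be simulated under naive inertia (an illustrative sketch; the names are hypothetical, and a "wait" step is included between loading and shooting, as in the standard presentation of the problem). The simulation yields the intuitive answer; the difficulty arises in logical formalizations that merely minimize change, which also admit an anomalous model in which the gun becomes mysteriously unloaded during the wait and Fred survives.

```python
# Illustrative simulation of the Yale shooting scenario under naive
# inertia: every fluent keeps its value unless an action changes it.
def do(action, state):
    state = dict(state)          # fluents persist by default
    if action == "load":
        state["loaded"] = True
    elif action == "shoot":
        if state["loaded"]:
            state["alive"] = False
        state["loaded"] = False  # firing unloads the gun
    return state                 # "wait" (or any other action) changes nothing

state = {"alive": True, "loaded": False}
for action in ["load", "wait", "shoot"]:
    state = do(action, state)

print(state)  # {'alive': False, 'loaded': False}
```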
Bibliography
Dennett, Daniel C. Cognitive Wheels: The Frame Problem of Artificial Intelligence. In [Pylyshyn, 1987]. Available on the Internet. Spanish version: Ruedas cognitivas: el problema del marco de la inteligencia artificial. In Pérez Miranda, Luis Ángel (coord.), "Lecturas filosóficas de ciencia cognitiva", pp. 317-348, 2007.
Dennett, Daniel C. Brainstorms. MIT Press, 1978.
Ford, K.; Pylyshyn, Z. (eds.). The Robot's Dilemma Revisited. Ablex Publ., Norwood, NJ, 1996.
Hanks, S.; McDermott, D. Default reasoning, nonmonotonic logic, and the frame problem. Proceedings of the American Association for Artificial Intelligence: 328-333, 1986.
Hanks, S.; McDermott, D. Nonmonotonic logic and temporal projection. Artificial Intelligence 33(3): 379-412, 1987.
Hayes, Patrick J. What the frame problem is and isn't. In [Pylyshyn, 1987], pp. 123-137, 1987.
McCarthy, John; Hayes, Patrick J. Some philosophical problems from the standpoint of Artificial Intelligence. In Machine Intelligence 4, ed. D. Michie and B. Meltzer, Edinburgh: Edinburgh University Press, pp. 463-502, 1969. Available on the Internet.
McCarthy, John. Circumscription. A form of non-monotonic reasoning. Artificial Intelligence 13: 27-39, April 1980.
McCarthy, John. Applications of circumscription to formalizing common-sense knowledge. Artificial Intelligence 28(1): 89-116, February 1986.
Morgenstern, Leora. The problem with solutions to the frame problem. In [Ford & Pylyshyn, 1996], pp. 99-133. Available on the Internet.
Pylyshyn, Zenon W. (ed.). The Robot's Dilemma: The Frame Problem in Artificial Intelligence. Ablex Publ., Norwood, NJ, 1987.
Shanahan, Murray. Solving the Frame Problem: A Mathematical Investigation of the Common Sense Law of Inertia. MIT Press, 1997.
Shoham, Y. Reasoning about Change. Cambridge: MIT Press, 1988.
Silenzi, María Inés. El problema de marco: la formalización de sistemas dinámicos en agentes artificiales. Revista Iberoamericana de Argumentación, vol. 3, pp. 1-20, 2011. Available on the Internet.
Silenzi, María Inés. El problema de marco considerado desde una perspectiva cognitiva. Available on the Internet.
Silenzi, María Inés. El problema del marco como nudo teórico en la interfaz entre la filosofía y las ciencias. Docencia y Tecnología, vol. 45, pp. 81-102, Nov. 2012. Available on the Internet.