The Frame Problem

One of the central themes in Artificial Intelligence, Cognitive Science and Philosophy

"A new, deep epistemological problem, accessible in principle, but unnoticed by generations of philosophers" (Daniel Dennett).



Origin of the problem

The frame problem is a significant example of the limitations of the logical paradigm of Artificial Intelligence (AI), the so-called "logicist AI", a branch of AI that attempts to formalize human reasoning (including common sense reasoning) by applying exclusively mathematical logic.

The problem was first raised by John McCarthy and Patrick Hayes [1969] in their famous paper "Some Philosophical Problems from the Standpoint of Artificial Intelligence".

McCarthy and Hayes posed the problem as the difficulty of expressing, in first-order predicate logic, that which does not change (that which persists) in a dynamic domain as a consequence of the actions taking place in that environment. To solve this problem they developed the so-called "situation calculus", an application of first-order predicate logic that made it possible to represent cause-effect relationships and to reason taking time into account (temporal reasoning). In the situation calculus, some attributes of objects do not change (persist) while others change (are dynamic) as a consequence of actions.
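The situation-calculus idea can be sketched roughly as follows. This is a minimal Python illustration with assumed names (`S0`, `result`, `holds_position`); the original calculus is pure first-order logic, not a program:

```python
# A minimal sketch of situation-calculus-style representation.
# Situations are terms built from an initial situation S0 by
# applying actions: result(action, situation) denotes the
# situation reached after performing the action.

S0 = ("S0",)

def result(action, situation):
    """The situation reached by performing `action` in `situation`."""
    return ("result", action, situation)

# Effect axiom: after move(obj, pos), the object's position is pos.
def holds_position(obj, pos, situation):
    if situation[0] == "result":
        _, action, prev = situation
        if action == ("move", obj, pos):
            return True        # the effect axiom applies
        # Without a frame axiom, nothing can be concluded about
        # attributes untouched by the action -- the frame problem.
        return None            # unknown
    # Initial facts: object a starts at position p1.
    return (obj, pos) in {("a", "p1")}

s1 = result(("move", "a", "p2"), S0)
print(holds_position("a", "p2", s1))  # True
print(holds_position("a", "p1", S0))  # True
```

Note that the query about an attribute the action did not mention returns "unknown": pure effect axioms say nothing about persistence.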

Three concepts appear in the situation calculus: situations (snapshots of the world), actions (which transform one situation into another) and fluents (attributes whose values may change from situation to situation). A simple example, illustrating the spirit of the frame problem, is an object a with color and position attributes: if a is moved, its position changes but its color does not, and if a is painted, its color changes but its position does not. This example, which at the human level is very simple and a matter of common sense, cannot be expressed by classical predicate logic.

To solve the problem, McCarthy and Hayes proposed "frame axioms" to specify the things that do not change in a dynamic domain. For example: "With the action of moving, the color does not change". Without such axioms, a system is unable to deduce which attributes persist.
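The role of frame axioms can be sketched in Python as follows. The fluent names and the `result` function are illustrative assumptions; the point is that every action needs explicit clauses for what it leaves alone:

```python
# A sketch of explicit frame axioms: for each action, the axioms
# state not only what changes (effect axioms) but also what does
# NOT change (frame axioms).

S0 = {"color": "red", "position": "p1"}   # initial fluent values

def result(action, state):
    """Apply one action's effect axiom plus its frame axioms."""
    kind, value = action
    new = {}
    if kind == "move":
        new["position"] = value              # effect axiom
        new["color"] = state["color"]        # frame axiom: color persists
    elif kind == "paint":
        new["color"] = value                 # effect axiom
        new["position"] = state["position"]  # frame axiom: position persists
    return new

s1 = result(("move", "p2"), S0)
print(s1)   # position changed, color persisted
```

With n fluents and m actions, roughly n × m such persistence clauses are needed, which is the combinatorial burden discussed next.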

But it happens that a large number of frame axioms may be needed to specify the attributes that do not change under different actions. Indeed, a frame axiom would be needed for each action and attribute that does not change, which would be tedious, inelegant and inefficient. Moreover, it may happen that, under concurrent actions, some axioms are false.

The frame problem consists in trying to find a synthetic way to specify attributes that do not change in a dynamic environment, without having to specify frame axioms.

One way to simplify the problem is to apply what is called "the common sense law of inertia": a property does not change (persists) unless an action is explicitly specified to change it. But this general assumption has its own drawbacks.

The term "frame problem" derives from a technique used in animated cartoons (framing), where there is a fixed background scene on which the animated parts are superimposed. In this analogy, the actions specify the things that change, while the rest (the frame) remains unchanged.


Solution approaches to the frame problem

Many solutions have been proposed to formalize the frame problem. Most of them are expressed in an AI language, such as Lisp or Prolog. Other solution approaches are based on heuristic rules, causal connections, frames (Minsky), scripts (Schank), etc.


The various conceptions of the frame problem

The frame problem as originally presented by McCarthy and Hayes was rather narrow. Today the frame problem has been generalized and is given a broader interpretation in AI, although opinions and interpretations about it differ. The frame problem is also considered from the philosophical point of view, specifically from epistemology: the so-called "epistemological frame problem". The first philosopher to refer to this problem was Daniel Dennett [1978], who popularized it.

It is also considered from the point of view of cognitive science, the science that deals with how information is represented in the mind/brain and transformed into knowledge.

Actually, the frame problem, whether from the AI, philosophical or cognitive point of view, is the problem of meaning or consciousness. It is a problem whose paradigmatic example is John Searle's Chinese room argument. In this sense the frame can be identified with consciousness or with common sense.

For example, if we have a robot acting in a dynamic environment, the frame problem is in this case the problem of the robot's adaptation to the environment. The robot's behavior at any given time is determined by its knowledge base (or model of the world) and the input from the environment (the modification of the environment). The robot must ignore irrelevant changes (with respect to the immediately preceding situation) and consider only the significant ones directly related to its task or goal.

To do this, the robot should "understand", or be aware at a general or global level, of what is happening at all times, of each new situation, and distinguish between relevant (or useful) information and that which is not. It must also be aware of the consequences of its actions. To do so, it should distinguish between superficial (or particular) meaning and deep (or general) meaning. Actually, the frame problem is the problem of creating a computational model that mimics human behavior, including common sense reasoning.
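The relevance-filtering idea described above can be sketched as follows. All names (`GOAL_RELEVANT`, the fluent names) are illustrative assumptions, not part of any particular robot architecture:

```python
# Sketch of relevance filtering: the robot compares each new percept
# with the previous one and attends only to changes that touch
# fluents relevant to its current task or goal.

GOAL_RELEVANT = {"door_open", "obstacle_ahead"}   # fluents tied to the task

def significant_changes(prev, curr):
    """Return only the goal-relevant fluents whose values changed."""
    changed = {k for k in curr if curr.get(k) != prev.get(k)}
    return changed & GOAL_RELEVANT    # ignore irrelevant changes

prev = {"door_open": False, "obstacle_ahead": False, "wall_color": "grey"}
curr = {"door_open": True,  "obstacle_ahead": False, "wall_color": "beige"}
print(significant_changes(prev, curr))   # only the door change matters
```

The hard part, of course, is that `GOAL_RELEVANT` is given here by hand; deciding relevance automatically is precisely the generalized frame problem.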


The implications and connections of the frame problem

The frame problem is not an isolated problem in AI. It is linked to all the major issues in AI.

MENTAL's Solution to the Frame Problem

In MENTAL the solution to the frame problem is enormously simple. The simple example mentioned above, of an object a with color and position attributes, is expressed with a generic expression: when the value of the color attribute is changed, the object automatically changes color, since this has been specified in the generic expression, but the position attribute does not change. The same happens when the position attribute is changed: the color attribute does not change.

MENTAL makes it possible to address the frame problem, and also virtually all related problems, since it overcomes the limitations of traditional AI languages (Lisp and Prolog), providing flexible and powerful linguistic resources for representing knowledge in a changing environment where there may even be different interacting agents. All these topics are applications of MENTAL, and all are related through the universal semantic primitives.



Addendum

The Yale shooting problem

The name of this problem comes from the fact that it was proposed by Steve Hanks and Drew McDermott [1987], of Yale University. This problem refers to situations where classical logic is insufficient; it is actually an example of non-monotonic temporal reasoning. The example proposed by Hanks and McDermott is as follows: a gun is loaded; after an interval of waiting, the gun is fired at a person called Fred. In this example there are two fluents (time-varying conditions): whether the gun is loaded and whether Fred is alive. And there are two actions: loading the gun and firing (unloading) it.

In normal logic, Fred is expected to be dead. However, there may be circumstances that cause Fred to survive: the gun malfunctions, the shot goes wide, Fred has moved, etc.

This problem has been solved in many ways: by including more variables or predicates, adding new conditions, considering higher-order conditions, using time-varying conditions, etc.

The Yale shooting problem is often presented as an illustrative example of a new form of reasoning: non-monotonic temporal reasoning.
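The scenario and its non-monotonic character can be sketched as follows. The simulation applies the law of inertia chronologically; action and fluent names are illustrative assumptions:

```python
# Sketch of the Yale shooting scenario under chronological inertia:
# every fluent persists unless an action's effect changes it.

def step(state, action):
    new = dict(state)          # inertia: everything persists by default
    if action == "load":
        new["loaded"] = True
    elif action == "unload":
        new["loaded"] = False
    elif action == "shoot":
        if state["loaded"]:
            new["alive"] = False   # a loaded gun kills Fred
        new["loaded"] = False
    # "wait" has no effects under inertia
    return new

s = {"alive": True, "loaded": False}
for a in ["load", "wait", "shoot"]:
    s = step(s, a)
print(s["alive"])   # the intended conclusion: Fred is dead

# Non-monotonicity: adding information (the gun was unloaded during
# the wait) forces the earlier conclusion to be withdrawn.
s2 = {"alive": True, "loaded": False}
for a in ["load", "unload", "shoot"]:
    s2 = step(s2, a)
print(s2["alive"])  # Fred survives
```

In classical (monotonic) logic a conclusion can never be retracted by new premises; here the extra "unload" fact reverses the outcome, which is exactly what non-monotonic temporal reasoning must accommodate.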


Bibliography