"My mind and the world are composed of the same elements" (Erwin Schrödinger).
"The order of the world and the order of the mind is the same" (Spinoza).
"Language is an isomorphic representation or model of the world" (Wittgenstein).
"Thought is a system" (David Bohm).
Mental Models
First, we must distinguish between theories of mind and mental models. Theories of mind try to explain what the mind is. Mental models attempt to explain how the mind works.
Several theories of mind have been put forward in the history of philosophy and scientific thought, including:
There is only one nature that manifests itself as mind and matter. This is the monistic theory.
Mind and matter are of different natures. It is the dualistic theory.
Mind is an epiphenomenon of the brain. Mental activity is a consequence of the immense number of interconnections between the neurons of the brain. This is the physicalist theory.
The mind is only a reflection of the external world, an abstraction of perception. It is the theory of the empiricists.
The mind is the same as the brain. It is the theory of identity.
Mind is the software of the brain. There is no fundamental difference between natural intelligence and artificial intelligence. This is the computational theory (computationalism).
The mind does not exist as an entity in itself; it is a system of meaning structures through which a person is in contact with the physical and social world.
The expression "mental model" was first used by Kenneth Craik [1967] in his 1943 book "The Nature of Explanation". This Scottish psychologist claimed that the mind builds "small-scale" models of reality, in order to explain it, reason, anticipate situations, etc.
Before Craik, Georges-Henri Luquet [1978] developed the idea of the mental model in his 1927 book "Le dessin enfantin" (Children's Drawing), in which he argued that children construct "internal models" of reality.
For the first Wittgenstein −that of the Tractatus, a work published in 1922−, the mind elaborates linguistic structures that reflect the structure of reality. There is correspondence between the structure of internal (mental) reality, external reality and the logical structure of language. Therefore, the model of the mind is the logical structure of language.
Cognitive Science appeared in 1956 and integrated different disciplines with the aim of joining efforts to try to solve the problem of the mind: what it is, how it works, what cognition is, how we construct new ideas, how we make decisions, how we interact with the environment, etc. Cognitive Science integrates 6 sciences (the so-called "cognitive hexagon"): neuroscience, AI (artificial intelligence), psychology, linguistics, anthropology and philosophy. It is assumed that there is a correspondence relationship between the internal and the external, between internal representations and objective reality, so one of the most important problems of Cognitive Science is the question of the internal representation of external reality.
In the 1980s two works were published with the same title "Mental Models" [Johnson-Laird, 1986] [Gentner & Stevens, 1983], but with different orientations.
The problem with mental models is that they cannot be verified empirically (objectively), since they belong to the internal (subjective) world. But with the advent of the computer and AI, things became a little easier, since these made it possible to model something similar to mental processes (reasoning, learning, etc.) and to experiment with them objectively in order to draw conclusions and approach a knowledge of the mind. Before AI, cybernetics had already detected remarkable analogies between control systems and certain aspects of the functioning of the human mind.
Different models of the human mind have been proposed throughout history. Each of these models uses a different paradigm. So far, a "universal model" of the mind, based on a universal paradigm capable of contemplating the different particular paradigms, has not appeared.
The computational model
With the appearance of the computer −a realization of the human mind−, the metaphor "the mind is the software of a computer and the brain is the hardware" immediately emerged. In fact, computers are referred to as "electronic brains". The computational model of the mind (or computationalism) then appeared: mental processes are computations.
This metaphor was very justified, there being many analogies between the computer and the mind:
Hardware. It corresponds physically to the brain.
Software. The mind is the software of the brain. Software is flexible, capable of adopting all kinds of contents.
Instructions. They are the principles or primitives of the mind. They correspond to the intermediary elements between hardware and software.
Memory. Computer memory corresponds to brain memory.
Instruction set. It is the lexicon of the mind (lexical semantics).
Machine language. Language of the mind (structural semantics).
Process. Data or information processing corresponds to mental processes.
Symbolic representations. They correspond to mental representations.
Logic. The Boolean logic used by the computer is the logic of human reasoning. This idea has its philosophical roots mainly in Hobbes ("reasoning is nothing but calculation") and Leibniz (who tried to create a logical calculus of human reasoning).
Input/Output. The input and output devices correspond, respectively, to the sensory (input) organs and to the (output) responses to the outside.
Computers are computational systems. Their technology is inspired by the "Universal Turing Machine" (UTM), a simple theoretical device governed by rules, with an unlimited linear memory (a tape of discrete cells), a set of internal states, a head for reading/writing symbols on the tape, and the ability to move one cell at a time along the tape, left or right. The UTM can emulate any particular Turing machine (TM) because it holds the corresponding code of that machine in its memory. Since the UTM represents the concept of computation in a very simple way, it was the first computational model of the mind.
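As an illustration of how little machinery this concept requires, the following sketch (in Python; the rule table for a unary increment machine is an assumption chosen for the example, not part of any particular theory) shows a minimal Turing machine interpreter: a finite table of rules, a tape, and a head that reads, writes and moves one cell at a time.

```python
# Minimal Turing machine interpreter: a finite rule table, a tape of cells,
# a read/write head that moves one cell at a time, and a halting state.
def run_tm(rules, tape, state="q0", head=0, blank="_", halt="halt"):
    cells = dict(enumerate(tape))                        # unbounded tape as a sparse dict
    while state != halt:
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]      # look up the applicable rule
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Illustrative rule table: append a '1' to a unary number and halt.
rules = {
    ("q0", "1"): ("1", "R", "q0"),    # skip over the existing 1s
    ("q0", "_"): ("1", "R", "halt"),  # write one more 1 at the end and halt
}
print(run_tm(rules, "111"))           # -> "1111"
```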
But the main problem with the computer metaphor is semantics. Computers are only symbol manipulators; they do not handle meanings; they are formal, syntactic, superficial instruments. This limitation of computers is reflected in John Searle's famous "Chinese room" thought experiment. For Searle, the most important aspects of consciousness (intentionality, semantics and subjective quality) can never be programmed.
The UTM represents the essence of computation in a very simple way, but it is very superficial. The UTM obviously cannot be a model of the mind, for many reasons. The main one is that it is a purely formal system, which does not handle meanings. But there are others: it uses a memory with a linear structure; it handles data (not information, much less knowledge); its rules are detailed and particular, and do not handle generic information; and its system for accessing and modifying memory is very limited.
The modular model
Jerry Fodor −one of the most prominent researchers on the subject of mental models and the philosophy of mind− in his book "The Modularity of Mind" [1986], defends the modular model of the mind: the mind is divided into modules, each one performing a specific function, and with connections between them. This model is framed within the computational thesis or metaphor: the mind is a general-purpose computer that processes information.
Several issues arise in this model:
Granularity, i.e., the size or magnitude of these modules. At the low end, a module may be a single component. At the upper end, a module can be a complete system. These two extremes correspond, respectively, to the weak and strong versions of modularity. A module in the strong version is also called a "Fodor module", since this author advocates the systemic conception of mental modules.
The structure or relationships between modules. Modules can form structures in many ways, the two most important being network and hierarchical. The latter is that possessed by physical systems (cells, tissues, organs, etc.).
The type of communication between modules. The most common conception is to assume that a module receives an input and produces an output. The inputs/outputs can also be with the outside (vision, hearing, etc.). The modules are assumed to be connected in a fixed way.
The degree of independence of each module. It is normal to assume that each module is isolated from the others, i.e., encapsulated with its own information, with internal procedures, and inaccessible from other modules. As the modules are independent, they can operate in parallel (see the sketch after this list).
The relationship between mental modules and neural structures of the brain.
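The following sketch (in Python; the module names and the fixed pipeline are assumptions made for illustration, not Fodor's own examples) shows the strong-modularity picture in miniature: encapsulated modules, each performing a specific, domain-restricted transformation, connected to one another in a fixed way.

```python
# Each module is encapsulated: a fixed input -> output transformation,
# whose internal procedure is inaccessible from other modules.
class Module:
    def __init__(self, name, process):
        self.name = name
        self._process = process                  # private, domain-specific procedure

    def __call__(self, signal):
        return self._process(signal)

# Hypothetical domain-specific modules, wired together in a fixed way.
edge_detector = Module("edges", lambda image: f"edges({image})")
shape_builder = Module("shapes", lambda edges: f"shapes({edges})")

def visual_system(retinal_input):
    # Fixed connections: the output of one module feeds the next.
    return shape_builder(edge_detector(retinal_input))

print(visual_system("retinal_image"))            # -> shapes(edges(retinal_image))
```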
According to Fodor's model, there are two types of mental faculties:
Vertical ones, which are domain-specific and are implemented as modules. A mental module is a domain-specific information processing system (e.g., for color, shape, spatial relations, etc.), with extremely fast processing, encapsulated (inaccessible from other modules), associated with particular neural structures, innate, with restricted inputs (e.g., the visual system is restricted to visual signals), and producing superficial, formal, non-conceptual outputs. Examples of modular systems are the perceptual processes (the visual system, the auditory system, etc.) and linguistic ones.
Horizontal ones, which are generic, i.e., independent of the contents to which they are applied. They are implemented as non-modular systems; they are central, global and holistic. These systems elaborate concepts, fix beliefs and perform object recognition. Examples are attention, memory and thinking.
The functional (or behaviorist) model
According to the functional model of the mind, all mental processes are functions. These functions mediate between sensory inputs and motor outputs, and can be described in the form of algorithms. It does not matter how the mind is organized internally. The mind is a black box. What matters is the behavior and the algorithms with which the mental functions are identified.
The functional model of the mind distinguishes between the structural (the neuronal organization) and the functional (the functions of the brain). Functions are independent of their form of physical implementation; they can be carried out by natural or artificial systems.
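This independence from the physical substrate (multiple realizability) can be illustrated with a small sketch (in Python; the two "realizations" are assumptions chosen for the example): the same input-output function, and therefore the same "mental function" in the functionalist sense, realized by entirely different internal mechanisms.

```python
# The functional model only cares about the input -> output mapping,
# not about how that mapping is physically realized.
def adder_lookup(a, b):
    # Realization 1: a (tiny) lookup table.
    table = {(x, y): x + y for x in range(4) for y in range(4)}
    return table[(a, b)]

def adder_recursive(a, b):
    # Realization 2: Peano-style successor arithmetic.
    return a if b == 0 else adder_recursive(a + 1, b - 1)

# Functionally identical black boxes: same behavior, different internals.
assert adder_lookup(2, 3) == adder_recursive(2, 3) == 5
```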
Mental functionalism was first formulated by Hilary Putnam in 1960, who is considered the "father" of mental functionalism [Putnam, 1960], although he is now one of the major critics of functionalism [Putnam, 1990]. Other proponents of this model are Fodor and Johnson-Laird. For Fodor, mental modules are of a functional type. For Johnson-Laird, psychology is reduced to the study of functional programs, which are independent of neurophysiology.
The problem with functionalism is that it cannot explain or model qualia, the subjective qualities of individual experiences.
The symbolic model
According to the symbolic model, also inspired by the computer metaphor, the mind is a symbol-manipulating computer:
Concepts are symbols.
Mental contents are symbol structures.
Mental language is a symbolic language.
The individual interacts with the environment through symbolic internal representations.
Symbols are arbitrary. They bear no relation of similarity to their referents.
Symbols are combined according to certain syntactic patterns.
Symbols are amodal, i.e., independent of the sensory modality of the information. For example, the mental symbol corresponding to "table" is the same if I see a table, if I hear the word "table", if I see it written, and so on.
Allen Newell and Herbert Simon hypothesized that the mind is a "physical symbol system" [Newell, 1994] [see Comparisons - MENTAL vs. IPS].
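Two of the points above can be made concrete with a small sketch (in Python; the lexicon and the predicate structure are assumptions of this text, not Newell and Simon's notation): symbols are amodal, so different sensory routes converge on the same internal token, and mental contents are structures built from such tokens according to syntactic patterns.

```python
# Amodal symbols: different sensory modalities map onto the same internal token.
LEXICON = {"seen:table": "TABLE", "heard:table": "TABLE", "read:table": "TABLE"}

def perceive(modality, stimulus):
    return LEXICON[f"{modality}:{stimulus}"]       # same symbol whatever the route

# Mental contents as symbol structures, combined by a syntactic pattern.
def predicate(relation, *symbols):
    return (relation, *symbols)                    # e.g. ("ON", "BOOK", "TABLE")

assert perceive("seen", "table") == perceive("heard", "table") == "TABLE"
print(predicate("ON", "BOOK", perceive("read", "table")))   # ('ON', 'BOOK', 'TABLE')
```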
The problems of the symbolic model are:
No computer implementation of this model has been done so far.
Individual symbols have no meaning (conceptual reference). Meaning is reduced to the relationships between symbols.
Symbolist theories are not falsifiable.
The connectionist model
The connectionist model, also called "subsymbolic", is an alternative to the simplistic symbolic model. It is a model that combines modularity and functionalism. According to this model, the mind is a network of connections, which are a reflection of the network of neural connections. This network is distributed and dynamic. Representatives of the connectionist school are Patricia Churchland and Terrence Sejnowski [1994].
The best known connectionist model is the neuronal model based on artificial neurons by Walter Pitts and Warren McCulloch [1943]. Artificial neurons are simple processors, with connections between them (the degree or strength of a connection is determined by weights or values), with parallel distributed processes and learning rules. The network "learns" through a series of selected cases.
This model is based on the way the nervous system of living organisms operates. The normal computational process is local, linear and causal. Network processing is global, nonlinear and recursive. The connections between neurons are represented in a logical language. A neuron, when activated, activates others, thus establishing an analogy between a proposition and its logical inferences.
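A minimal sketch of the kind of unit just described (in Python; the numerical values and the perceptron-style update are assumptions chosen for illustration, not the original McCulloch-Pitts formulation) shows weighted connections, a threshold, and a simple learning rule that adjusts the weights from a series of selected cases.

```python
# A single artificial neuron: a weighted sum of inputs compared to a threshold.
def neuron(inputs, weights, threshold=0.5):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Perceptron-style learning rule: nudge the weights toward the desired output.
def train(cases, weights, rate=0.1, epochs=20):
    for _ in range(epochs):
        for inputs, target in cases:
            error = target - neuron(inputs, weights)
            weights = [w + rate * error * i for w, i in zip(weights, inputs)]
    return weights

# The network "learns" the logical AND function from selected cases.
cases = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = train(cases, [0.0, 0.0])
print([neuron(x, weights) for x, _ in cases])      # -> [0, 0, 0, 1]
```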
A basic problem with this model is its difficulty in performing recursive operations, operations that are representative of human cognition.
The linguistic (or representational) model
According to this model, initially postulated by Jerry Fodor [1985] and supported by Steven Pinker [2009], the mind has a language of its own: the language of thought, or "mentalese" (an English coinage formed like the names of natural languages). Fodor justifies it by analogy: since external (spoken) language is systematic and expresses thoughts, thoughts must also be systematic and must also be governed by an internal language. The language of thought hypothesis is based on the internal representation of mental contents and on computationalism.
The characteristics of the hypothetical language of thought are:
It is something real, innate, that is encoded in the brain, and is not just an explanatory tool.
It has its own syntax, based on "tokens", which correspond to simple concepts. The syntax plays a mediating role between the power of symbols and their semantic content.
By means of combinatorial logical rules, more complex concepts or thoughts are formed.
Thinking consists of computational operations on the internal representations of the language of thought.
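A hedged sketch of this combinatorial picture (in Python; the tokens and the single rule are assumptions of this text, not Fodor's inventory) shows simple conceptual tokens composed by a rule into complex, systematic thought structures: whatever combination can be formed, its permutations can be formed as well.

```python
# Simple conceptual tokens of the hypothetical internal language.
TOKENS = {"JOHN", "MARY", "LOVES"}

# A combinatorial rule: build a complex thought from simpler constituents.
def combine(relation, agent, patient):
    assert {relation, agent, patient} <= TOKENS    # only known tokens may be combined
    return (relation, agent, patient)

# Systematicity: a mind that can think "John loves Mary" can also think "Mary loves John".
thought_1 = combine("LOVES", "JOHN", "MARY")
thought_2 = combine("LOVES", "MARY", "JOHN")
print(thought_1, thought_2)   # ('LOVES', 'JOHN', 'MARY') ('LOVES', 'MARY', 'JOHN')
```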
Other models
The identity model [1956].
Formulated by Ullin Place and John Smart in the 1950s, it states that a mental state is identical to a neural state.
"The Society of Mind," by Marvin Minsky [1986].
According to Minsky, the mind is made up of many small processes called "agents". Each of these agents individually performs only simple tasks and does not possess a mind. However, the gathering of these agents into societies gives rise to mental processes. Intelligence may be the result of a hierarchy of agents.
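The idea can be sketched loosely (in Python; the agent names are assumptions inspired by Minsky's "Builder" examples, not a faithful reproduction of them): individually mindless agents, each doing one trivial job, coordinated by a higher-level agent whose collective behavior exceeds that of any member.

```python
# Each agent performs one simple task and has no "mind" of its own.
def see_block(world):   return "block" in world
def move_to(world):     return "moved to " + world["target"]
def grasp(world):       return "grasping " + world["target"]

# A higher-level agent that merely coordinates the lower-level ones.
def builder_agent(world):
    if see_block(world):
        return [move_to(world), grasp(world)]
    return []

world = {"block": True, "target": "block"}
print(builder_agent(world))   # -> ['moved to block', 'grasping block']
```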
The instinctive model, by Steven Pinker [2009].
Pinker is a cognitive scientist, advocate of evolutionary psychology and the representational theory of mind. According to this author, mind and language are "instincts", biological adaptations resulting from natural selection.
System Dynamics, by Jay Forrester [1961].
Jay Forrester is the "father" of System Dynamics, and his book "Industrial Dynamics" [1961] is considered its starting point. This discipline is of a general type, since it studies all kinds of complex dynamic systems (social systems, civil and industrial engineering, cities, etc.) by means of simulation, with the help of computer models.
Forrester established a parallel between dynamic systems in general and a hydrodynamic system made up of reservoirs interconnected by channels, with delays and feedback loops, in which flows (regulated by valve-like exogenous elements) vary the levels of the reservoirs.
Since System Dynamics is a generic theory of complex systems, it has been suggested that it could also serve as a formal mathematical model of the complexity of the mind.
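A minimal stock-and-flow sketch (in Python; the inventory scenario and all parameter values are assumptions chosen for the example, not Forrester's DYNAMO models) illustrates the scheme: one reservoir (a level), an inflow regulated by a feedback "valve" that gradually closes the gap with a goal, and a fixed outflow.

```python
# One reservoir (level), an inflow valve regulated by feedback, and a fixed outflow.
def simulate(steps=12, level=20.0, goal=50.0, adjust_time=4.0, outflow=5.0, dt=1.0):
    history = []
    for _ in range(steps):
        inflow = outflow + (goal - level) / adjust_time   # feedback loop: close the gap gradually
        level += (inflow - outflow) * dt                  # integrate the net flow into the level
        history.append(round(level, 1))
    return history

print(simulate())   # the level rises from 20 and settles near the goal of 50
```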
Howard Gardner's model of multiple intelligences [2003].
It is actually not a complete model of the mind, but only of intelligence. It is a model proposed in 1983 by Gardner in which he postulates that intelligence is not something unitary, but is constituted by multiple distinct and independent intelligences. Intelligence is both innate and acquired. Gardner defines intelligence as the "ability to solve problems or produce products that are valuable in one or more cultures", and distinguishes eight different types of intelligence: linguistic-verbal, logical-mathematical, spatial, musical, bodily-kinesthetic, intrapersonal, interpersonal and naturalistic.
Mind as a system, by Gregory Bateson [1972, 2010].
Bateson was a universalist who sought a synthetic theory or foundation of internal (mental) and external (physical) reality through the integration and harmonization of all knowledge. Influenced by Cybernetics and General Systems Theory, he considered that:
Reality is a system of hierarchical and interdependent components in which feedback plays a preponderant role. The components communicate (or express themselves) through differences, which are ideas. These communication relationships are what is really important and not the components themselves. The difference must be considered the unit of information.
The mind is a system: an aggregate of interacting parts or components. The mind is a cybernetic system because it processes information and has feedback. Mental processing is based on feedback. The interaction between the parts of the mind is triggered by difference. The effects of difference are transformational processes. The classification of these transformation processes reveals a logical hierarchy in all mental phenomena.
There is a global ecosystem and a global, immanent Mind in which all other minds are subsumed. This global Mind is comparable to God.
Daniel Dennett's model [2009].
Dennett is an advocate of the computational model of the mind, but separates mental content and consciousness. Consciousness is a virtual machine that relies on generic or high-level algorithms and acts as a control mechanism. Consciousness is high-level abstract thought. Consciousness is a phenomenon derived or emerging from the mind by a process of generalization, of natural selection, a product of evolution. Consciousness is the algorithm and the mind the information processor.
Dennett's theory is consistent with strong AI (machines can emulate the human mind).
MENTAL, a Theory and Model of Mind
The Primary Concepts
The process of thinking is seemingly complex. But throughout history, several authors have raised the possibility that such complexity is the result of the combination of a basic set of concepts. The identification of these concepts and their combinatory mechanisms would constitute the so-called "language of thought", with which a universal and perfect language could be constructed [Eco, 1993].
"The psychological community suffers from a severe case of physical envy. Everyone has been searching for a minimal set of basic principles of psychology, a very small collection of incredibly powerful ideas that alone can explain how the mind works" (Marvin Minsky).
The search for these concepts is based on two possible movements of consciousness:
Descending (or particularistic).
Through a process of successive refinement, try to arrive at a series of primitive or atomic concepts. The rest of the concepts would result from the combination of those primitive concepts, through a constructive process of ascending sense.
Ascending (or generalist).
By a process of increasing abstraction, arrive at a series of generic, high-level concepts, philosophical categories or general principles. All other concepts would be particularizations or projections of these generic concepts, in a process of descending sense.
Regarding the first case, there is no evidence, neither linguistic nor psychological, of the existence of a set of atomic, indivisible concepts from which all the others can be constructed. On the other hand, in the opposite sense, that of abstraction and generality, it is possible to identify increasingly generic concepts that encompass other more particular ones. This is justified for three reasons:
First, because there is a hierarchy of concepts, with varying degrees of generality, so that the extension of these concepts becomes greater and greater until they cover the whole of reality.
Secondly, because of the natural tendency towards a cognitive economy, aimed at obtaining the maximum amount of information using the minimum number of cognitive resources.
Third, because, once generic concepts are established, they can be combined to create lower level concepts, e.g., the concept "man" is a concept derived from "animal" and "rational".
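This third, constructive use of generic concepts can be sketched (in Python; the feature sets are assumptions of this text, not a claim about the actual content of these concepts) by treating generic concepts as sets of features and a derived concept as their combination.

```python
# Generic concepts represented as sets of features; a derived concept as their combination.
ANIMAL   = {"living", "sentient", "mobile"}
RATIONAL = {"reasoning", "language"}

# "Man" obtained as the combination of the two more generic concepts.
MAN = ANIMAL | RATIONAL

print(MAN >= ANIMAL and MAN >= RATIONAL)   # True: the derived concept includes both
```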
The archetypes of consciousness
A model of a system is a general conceptual scheme of its structure and functioning, capable of explaining all its particular behaviors. But trying to create a model of the mind with the help of the mind is an impossible task because it is paradoxical and self-referential. We are faced with a situation analogous to the problem of the formalization of semantics and consciousness.
In order to realize a model of the mind, it is necessary to place oneself on a higher plane than the mind, which is consciousness, for as a faculty of the soul it is above the mind, the latter being an instrument of consciousness. The mind is not in the brain nor is it an epiphenomenon of the brain. Mind is in a higher dimension than the brain.
In this sense, the archetypes of consciousness, in themselves constitute the model of mind because they constitute the supreme abstraction. The primary archetypes are the foundation of the possible worlds, and are common to the internal (mental) world and to the external (physical) world.
In MENTAL, theory and model coincide, since it integrates what the mind is and how the mind works. It integrates these two complementary concepts, i.e., the synthetic and the analytical.
The human mind is complex, but we know that it is semantic in nature, that is, it is based on a structure of concepts to apprehend reality, concepts ranging from the simplest to the most complex, which must be combinations of the simple ones.
MENTAL is a simple language (although capable of representing complexity) and has a semantic foundation. From these two statements it seems logical and natural to present MENTAL as an approach to a supposed "language of the mind". And we say "language of the mind" or "mental language" (this time the word "mental" without capital letters) and not "language of thought", since the former is more generic than the latter, as it covers state or representational (memory) and process (or computational) aspects.
Characteristics of MENTAL as a model of the mind
Abstract and universal concepts.
MENTAL is based on a set of abstract and universal concepts that are the primary archetypes, the archetypes of consciousness. These primary archetypes form a language, since they can be freely combined.
Inner world and outer world share the same principles, the same roots: the primary archetypes or archetypes of consciousness. These archetypes are reflected in language. The mind is connected to consciousness through the primary archetypes.
Simplicity.
Nature acts according to the principle of economy. Therefore, the model of the mind has to be as simple as possible and based on as few primitive concepts as possible. The relationships between these concepts must also be as simple as possible. The maximum simplicity is achieved when these relations are provided by the primitive concepts themselves, i.e., when in the language the structural semantics is equal to the lexical semantics.
Supreme abstraction.
The MENTAL model transcends any particular situation, it rises to the highest possible abstraction that grounds all mental content.
Non-locality.
Mind is a system in which all contents are non-local; everything is connected with everything. In mind there is no physical space; there is mental space in which contents are interrelated.
In MENTAL, space is abstract, and it is both local and non-local. It is the abstract space in which the concrete contents are stored: the expressions (which are interrelated), and which correspond to the mental contents.
The abstract space is the space in which all kinds of interrelated expressions "live": sequences, sets, functions, procedures, rules, objects, etc.
Language of consciousness.
Consciousness and mental model go hand in hand. The model of the mind is based on the model of consciousness. Consciousness is manifested in the structure of mental contents. Consciousness manifested as structure is the support of all mental contents. There cannot be content without structure. The primary archetypes of consciousness are "forms without content", as Jung said. Consciousness unites all pairs of opposites and integrates them into a coherent unity.
The mental model is the MENTAL language itself. The MENTAL model makes it possible to relate the pairs of opposites: the descriptive and the operative, the static and the dynamic, the general and the particular, and so on. It also connects the deep level (of primary archetypes) and the superficial level of expressions.
AI model.
The MENTAL model is a model and language for AI, such that it facilitates and simplifies the development of AI applications, including knowledge representation and management. MENTAL assumes the maximum possible AI.
Unification and universality.
With MENTAL the boundaries between AI, mind model, language and consciousness are blurred. The mental model reflects the structure of reality, but also transcends reality itself. MENTAL is the Magna Carta of possible worlds.
Meta-model.
MENTAL integrates all the models of the mind mentioned above. It does not rival other possible models, but includes them as particular cases. It is analogous to programming paradigms in computer science by providing a unifying paradigm. Actually, more than a model, MENTAL is a meta-model of the mind, since it allows the construction of models based on particular mental paradigms.
MENTAL is not, then, a concrete model of the mind, for if it were, it would be a limited model, because being a concrete model implies being superficial, and the mind is of a deep, universal, semantic nature and cannot externalize (or manifest) itself as a model.
The mental model works at an internal level, so it cannot be representable, expressible at an external level, because it would be a contradiction. The mind cannot be captured with a determined scheme, but with principles or conceptual primitives, which are inexpressible. What is possible is the representation of concrete contents, which are manifestations of the internal.
The mind-computer analogy.
With MENTAL, the analogy between mind and computer becomes clearer. If we consider MENTAL as a computer operating system and as an application language, then the mind works as a computer with 12 pairs of "instructions" (the 12 primitives and their opposites), which serve not only to model processes but also to describe cognitive structures.
Particularly important is the issue of the union between syntax and semantics: every syntactic, formal expression carries an associated semantics at the supreme level of abstraction. This aspect is key to the mind-computer linkage. Since computers are only symbol manipulators, they have no semantics; the closest possible approach to semantics is through the use of semantic primitives or primary archetypes.
Union of thought, language and consciousness.
Language and thought are linked together as opposites: the relation between language (external) and thought (internal) is a dialectical one, as the Russian psychologist Vygotsky states:
Thought is not only expressed in words, but also comes into existence through them. Structures of speech become structures of thought.
Consciousness is that which links thought and language.
Thought is linguistic by nature, and language (external) is the instrument of thought (internal language).
Human memory relies primarily on language, has linguistic structure, and language relies on memory.
Degrees of freedom.
The mind does not have a concrete model. The mind has degrees of freedom or dimensions with which mental contents and mental models are constructed. Neither does the mind have a concrete language. With the degrees of freedom, particular languages and paradigms are constructed. Herein also lies the metamodel and metalanguage character of MENTAL.
Bibliography
Bateson, Gregory. Pasos hacia una ecología de la mente: colección de ensayos en antropología, psiquiatría, evolución y epistemología. Ballantine Books, 1972.
Bateson, Gregory. Una unidad sagrada: pasos ulteriores hacia una ecología de la mente. Gedisa, 2010.
Boden, Margaret A. Computer Models of Mind: Computational approaches in theoretical psychology. Cambridge University Press, 1988.
Chomsky, Noam. Crítica de Verbal Behavior de B.F. Skinner. En ¿Chomsky o Skinner? La génesis del lenguaje. Fontanella, 1977.
Churchland, Paul M. Materia y conciencia. Introducción contemporánea a la filosofía de la mente. Gedisa, 1992.
Churchland, Patricia Smith; Sejnowski, Terrence J. The Computational Brain (Computational Neuroscience). A Bradford Book, 1994.
Churchland, Patricia Smith. Neurophilosophy. Towards a Unified Science of the Mind-Brain. A Bradford Book, 1989.
Craik, K.J.W. The Nature of Explanation. Cambridge University Press, 1967.
De Vega, Manuel. Introducción a la Psicología Cognitiva. Alianza, 2006.
Dennett, Daniel. Contenido y conciencia. Gedisa, 2009.
Eco, Umberto. La búsqueda de la lengua perfecta. Editorial Crítica, 1993.
Fodor, Jerry A. El lenguaje del pensamiento. Alianza, 1985.
Fodor, Jerry A. Conceptos. Donde la ciencia cognitiva se equivocó. Gedisa, 1999.
Fodor, Jerry A. El olmo y el experto. El reino de la mente y su semántica. Paidós Ibérica, 1996.
Fodor, Jerry A. La mente no funciona así. Alcance y límites de la psicología computacional. Siglo XXI de España Editores, 2003.
Fodor, Jerry A. La modularidad de la mente. Ediciones Morata, 1986.
Fodor, Jerry A. Psicosemántica. El problema del significado en la filosofía de la mente. Editorial Tecnos, 1994.
Forrester, Jay Wright. Industrial Dynamics. Pegasus Communications, 1961.
Gardner, Howard. La nueva ciencia de la mente. Historia de la revolución cognitiva. Paidós, 2011.
Gardner, Howard. La inteligencia reformulada. Las inteligencias múltiples en el siglo XXI. Paidós, 2003.
Gentner, Dedre; Stevens, Albert. Mental Models. Lawrence Erlbaum Associates, 1983.
Jackendoff, Ray. La conciencia y la mente computacional. Visor, 1998.
Johnson-Laird, Philip N. Mental Models. Towards a Cognitive Science of Language, Inference and Consciousness. Harvard University Press, 1986.
Johnson-Laird, Philip N. El ordenador y la mente. Introducción a la ciencia cognitiva. Paidós, 1990.
Luquet, Georges-Henri. El dibujo infantil. Editorial Médica y Técnica, 1978.
McCulloch, Warren; Pitts, Walter. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5: 115-133, 1943.
Minsky, Marvin. La sociedad de la mente. Ediciones Galápago, Argentina, 1986.
Newell, Allen. Unified Theories of Cognition. Harvard University Press, 1994.
Nightingale, Andrea; Sedley, David (editores). Ancient Models of Mind: Studies in Human and Divine Rationality. Cambridge University Press, 2010.
Pinker, Steven. El instinto del lenguaje. Alianza Editorial, 2009.
Pylyshyn, Zenon W. Computación y Conocimiento. Editorial Debate, 1988.
Pinker, Steven. El instinto del lenguaje. Cómo la mente construye el lenguaje. Alianza, 2012.
Pinker, Steven. Cómo funciona la mente. Destino, 2004.
Putnam, Hilary. Minds and Machines. In Sidney Hook (ed.), Dimensions of Mind. Collier Books, 1960.
Putnam, Hilary. Representación y realidad. Un balance crítico del funcionalismo. Gedisa, 1990.
SEP (Stanford Encyclopedia of Philosophy). The Language of Thought Hypothesis. Internet.