"Mathematics is interested in infinite systems, while computer science is not."
(Donald Knuth)
"Mathematics deals with theorems and static relations, while Computer Science deals with algorithms and dynamic relations."
(Donald Knuth)
The Nature of Computer Science
Computer science was born from the convergence of several disciplines:
Mathematics, which provided the theoretical foundations.
Physics, which provided the electronic circuits.
Linguistics, which helped establish the syntactic structure (the formal grammar) of programming languages. Although in the early days of programming a formal grammar was not specified, today it is indispensable.
The development of computers was the culmination of several key ideas:
Binary code, which can be traced back to the hexagrams of the ancient Chinese I Ching and which Leibniz later rediscovered and formalized.
Boolean algebra, presented by George Boole in 1854 in "The Laws of Thought", based on simple logical laws about binary values.
The introduction by Alan Turing of an abstract machine of very simple operation (what we today call a "Turing machine"), which allowed specific calculations to be carried out by means of a program (a set of instructions) and some input data. This machine made it possible to define the concepts of computation and algorithm. (A minimal simulator of such a machine is sketched right after this list.)
Turing also devised a universal machine, in which the program resided in the machine's own memory, and which made it possible to perform all types of calculations. This key idea laid the theoretical foundations for the subsequent development of the computer, a "general-purpose" or universal machine capable of performing all kinds of processes. In short, it made hardware flexible.
The theory of information and communication, created by Claude Shannon. Shannon demonstrated that Boolean algebra could be implemented by electronic circuits.
John von Neumann created the computer architecture that bears his name (the von Neumann architecture), based on Turing's concept of the program stored in memory, so that the machine would be universal, that is, able to perform any calculation. Von Neumann also collaborated on the ENIAC, the first general-purpose digital electronic computer, built by J. Presper Eckert and John Mauchly.
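To make the idea of a Turing machine tangible, here is a minimal simulator sketched in Python. Everything in it (the function name, the sparse-tape representation, the sample program that increments a binary number) is our own illustration, not something taken from Turing's paper.

    # Minimal Turing machine simulator (illustrative sketch; all names are ours).
    # A program maps (state, symbol) -> (new_symbol, move, new_state),
    # where move is -1 (left) or +1 (right); the machine halts in state "halt".
    def run_turing_machine(program, tape, state="start", blank="_", max_steps=10_000):
        cells = dict(enumerate(tape))      # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            new_symbol, move, state = program[(state, symbol)]
            cells[head] = new_symbol
            head += move
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # Sample program: increment a binary number by one.
    INCREMENT = {
        ("start", "0"): ("0", +1, "start"),   # scan right over the digits
        ("start", "1"): ("1", +1, "start"),
        ("start", "_"): ("_", -1, "carry"),   # fell off the end: go back
        ("carry", "0"): ("1", -1, "halt"),    # 0 + carry = 1, done
        ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, keep carrying
        ("carry", "_"): ("1", -1, "halt"),    # overflow: write a new leading 1
    }

    print(run_turing_machine(INCREMENT, "1011"))   # prints 1100

The point of the sketch is the one made above: the same fixed machinery executes any computation, given a suitable program and some input data.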
But we cannot forget that Ada Lovelace was the first person to write a computer program, taking as her basis the functional design of Babbage's Analytical Engine.
Ada anticipated several important concepts that would become reality years later. She conceived of a universal machine able to perform numerical computations and symbolic processes alike. She also conceived of the program, or algorithm, with three control structures: the subroutine, the loop and conditional branching. In short, Ada made the conceptual leap from the calculating machine to the general-purpose computer. The computer as we think of it today (a universal device) was imagined by Ada and later developed by Turing and von Neumann.
A distinction must be made between theoretical computer science and practical computer science. Theoretical computer science was born with Boole, Ada Lovelace, Turing and Shannon. Practical computer science was born with von Neumann. Turing and Shannon also acted as bridges between theoretical and practical computer science.
Despite all this well-known background, there is debate about the true nature of computer science. This is understandable: if there are doubts about the nature of mathematics, it seems logical that there should also be doubts about the nature of computer science, which has inherited so much from mathematics. There are different opinions on the subject, among them the following:
The science of the theoretical and practical principles of computers.
The science of computation.
The science of algorithms.
Finite or discrete mathematics.
Practical or experimental mathematics.
A branch of engineering.
A branch of linguistics.
Programming Languages
The software crisis. The crisis of programming languages
In 1968, during the NATO Software Engineering Conference, the term "software crisis" was coined to indicate that software was difficult to create, maintain, plan and control, and that it was inflexible, unreliable and often unusable. Many years have passed since then, but the problem remains, as no fully satisfactory solution has been found.
An essential element in the development of computer applications is the programming language, which remains deeply frustrating, mainly because of the thinking model it imposes. We should really speak of a crisis of language, rather than of software, a crisis that will not be overcome until we have a standard, universal, powerful, flexible, high-level, creative, easy-to-learn and easy-to-use language.
Since the origins of computing, many programming languages have been invented, and they are still being invented. Although there are many reasons for this (the application of different points of view, paradigms, models, and so on), the main underlying reason is dissatisfaction: no programming language has yet been found that fits our model of thinking, so that there is a perfect correspondence between our internal semantic resources and the concepts provided by the language.
Throughout the relatively short history of programming languages there have been a number of milestone languages, important breakthroughs that provided new ideas subsequently adopted by other languages. Among them are Pascal, Algol 68, Simula 67, Lisp, AWK, Snobol and Prolog. In recent years there has been a revival of interest in the design of new programming languages, from which languages such as CLU, Euclid and Mesa have emerged.
Today, new general-purpose programming languages continue to be invented that try to overcome the limitations of the current ones, adopting new paradigms or integrating existing paradigms, with the aim of providing greater generality, expressiveness and flexibility.
There are many alternatives to improve this situation. One of them is, as Terry Winograd [1979] states, to abandon our outdated view of what programming and programming languages are and start again at the beginning.
John Backus [1978], in his famous Turing Award lecture, already suggested that the "software crisis" can only be overcome by a revolutionary approach to the design of programming languages and to the task of programming itself. Although Backus advocated a functional approach, his manifesto is still in full force.
But there are not only programming languages. There are all kinds of computer languages: programming languages, specification languages, artificial intelligence languages, knowledge representation languages, database management languages, markup languages, scripting languages, and so on. The main problem is that all these languages are different and disconnected from each other. Ideally, there should be a mother language or a common conceptual core from which the different particular languages would emerge.
Multiplicity of programming paradigms
Ever since Thomas S. Kuhn [1977] used the word "paradigm" to refer to a way of looking at the world, and to point out that scientific evolution occurs not continuously but by leaps caused precisely by paradigm shifts, the term has established itself in the field of programming languages. There it denotes the different approaches or points of view regarding the conception of the programming process and the way of modeling a problem domain.
Since the beginning of computer science, programming languages have each been built around a single conceptual primitive. Behind the chosen concept, however, there is a series of auxiliary or second-level concepts. For example, in imperative programming the conceptual primitive is the statement, but there are arithmetic, logical, control and other kinds of statements.
The programming paradigms that we can consider "classic", and their corresponding conceptual primitives, are the following (a small illustration in code follows the table):

Paradigm           Conceptual primitive
Imperative         Statement
Functional         Function
Logic              Rule (condition → action)
Relational         Relation (table)
Object-oriented    Object
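As a small illustration of how the conceptual primitive shapes the code, here is the same trivial task (summing the squares of the even numbers in a list) written around the statement, around the function, and around a rule mimicked as a condition → action pair. Python is used only as a convenient vehicle; the example is ours.

    numbers = [1, 2, 3, 4, 5, 6]

    # Imperative: the primitive is the statement; state is mutated step by step.
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n

    # Functional: the primitive is the function; values are composed, nothing mutates.
    total_fn = sum(map(lambda n: n * n, filter(lambda n: n % 2 == 0, numbers)))

    # Logic-flavored: the primitive is a rule (condition -> action).
    condition, action = (lambda n: n % 2 == 0), (lambda n: n * n)
    total_rule = sum(action(n) for n in numbers if condition(n))

    assert total == total_fn == total_rule == 56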
It was hoped that the object-oriented paradigm (OOP) would be the "definitive" paradigm, the most appropriate way of conceiving reality and modeling problems. But it soon became clear that it was just one more paradigm, with its own limitations, and also the most complex of all.
After the classical paradigms, new paradigms have emerged (also based on a single conceptual primitive), such as aspect-oriented, agent-oriented, event-oriented and constraint-oriented programming. This "Tower of Babel" process seems to have no end, which has led to a more than disconcerting situation. All of these paradigms are legitimate, but none is fully satisfactory, because none is sufficiently generic. Paradigms are useful and necessary, but the problem is that:
Programming languages are the result of applying paradigms, which are, in all cases, more or less particular and restrictive visions or approaches that impose a restrictive conceptual model, thus limiting free expression and creativity.
All programming languages created are born disconnected from each other, without a common conceptual core.
When a new language is designed, associated to a certain paradigm, it has to be created from scratch.
So far, the "definitive" paradigm has not appeared, a universal paradigm such that the other paradigms are particular cases. This universal paradigm would only be possible with a universal formal language.
Multiparadigm languages
Most programming languages support a single paradigm, that is, a single way of modeling and looking at problems. This implies a restriction on other possible alternatives and, therefore, a brake on creative thinking by imposing a single mental model.
To overcome the mental restrictions imposed by single-paradigm languages, the so-called "multi-paradigm languages" have emerged [Hailpern, 1986, 1987] [Placer, 1991].
Multiparadigm languages are justified because there are problems that do not fit entirely within a single paradigm: problems where it is appropriate to apply one paradigm in one part and a different paradigm in another. They are also justified by the need for more room for maneuver, and for freedom beyond the limitations imposed by a single paradigm.
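A hint of what such mixing looks like in practice, using Python purely as an illustration (it happens to admit several paradigms; the example and its names are ours):

    from dataclasses import dataclass

    @dataclass
    class Account:                     # object-oriented part: data plus behavior
        owner: str
        balance: float
        def deposit(self, amount):
            self.balance += amount

    accounts = [Account("ada", 100.0), Account("alan", 50.0)]

    for acc in accounts:               # imperative part: a stateful batch update
        acc.deposit(10.0)

    total = sum(a.balance for a in accounts)   # functional part: a pure query
    print(total)                       # 170.0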
There are many examples of paradigm integration. Here are a few:
Integration of procedural, functional, logical, relational and object-oriented paradigms: G [Placer, 1988, 1991, 1992].
Integration of imperative, functional and logical paradigms: Nial [Jenkins, 1986, 1988, 1989], [Jenkins & Glasgow, 1989], [Glasgow, 1988, 1991], [Glasgow & Jenkins, 1988].
Integration of logical, functional and object-oriented paradigms: LIFE [Ait-Kaci & Podelski, 1991].
Integration of logical and functional paradigms: Funlog [Subrahmanyam & You, 1984], TABLOG [DeGroot & Lindstrom, 1986], LEAF, LOGLISP (Lisp with logic programming support).
Integration of logic and object-oriented paradigms: SPOOL [Koschman & Evens, 1988].
Integration of functional and object-oriented paradigms: Flavors [Moon, 1986], CommonLoops [Bobrow, 1986].
Integration of the relational and object-oriented paradigms: DSM [Rumbaugh, 1987].
Sometimes complete paradigms are not integrated, but only some features of a certain paradigm, for example:
The FGL language incorporates into the functional paradigm the concept of logical variable [Lindstrom, 1985].
Classical set theory and functional languages [Howe & Stoller, 1994].
Finite sets and logic programming [Dovier et al., 1993].
Paradigm integration can take the following forms:
By means of interfaces between different monoparadigm languages.
By extending a monoparadigm language to support one or more other paradigms.
In general, this would be the mere accumulation of the linguistic features or resources associated with the component paradigms.
Creating new languages that support several paradigms with a unified linguistic structure.
This last path is the most interesting, since the aim is to integrate the different paradigms by using an underlying deep semantics, extracting the generic mechanisms present at the root of all the particular paradigms. This approach is the one that can most help to understand the connections between the various paradigms that today are considered separate and disconnected.
Brent Hailpern [1986] states that this relatively new area of multi-paradigm language research is "exciting and vital," but that theoretical and practical efforts are needed before this area matures and is well understood.
This approach also avoids the so-called "impedance mismatch", the problem posed by the coexistence of different models of thought within a multiparadigm language. With an integrative approach based only on generic mechanisms, no such mismatch would exist.
Other limitations of programming languages
Lack of global vision.
Programming languages do not provide a global view of their resources, mainly for two reasons:
Conceptual inflation.
Too many concepts are used; they are not very generic, are sometimes ill defined, and some are isolated while others are rarely used.
Lack of orthogonality.
Most programming languages are not orthogonal. Orthogonality is the unrestricted ability to combine the resources of a language. In this sense, languages have numerous exceptions: a function does not admit another function as an argument, a procedure cannot be part of a data structure, and so on. (A sketch of what orthogonality allows appears at the end of this list.)
The global view of a language has to do with the mode of consciousness of the right side of the brain (synthetic, holistic), an aspect that has been ignored or neglected in traditional languages, but which is fundamental for the global understanding of a language.
No conceptual reflection.
Programming languages do not allow conceptual reflection, that is, applying a concept to itself: functions of functions, rules of rules, objects of objects, and so on (see the sketch at the end of this list).
Data types.
Data types in programming languages were invented for implementation reasons: error detection and performance. A universal language would not need them, because they limit freedom and expressiveness.
Dichotomy between programming languages and specification languages.
There should be a single language that can be applied for all degrees of abstraction: from specification (the general) to programming (the detail).
Dichotomy between operational and descriptive language.
Usually programming languages are operational. Very high-level ones also include descriptive resources, but the two aspects are not integrated in the same linguistic structure.
Dichotomy between data and processes.
Data and process structures are different.
Dichotomy between numeric-symbolic language and graphic language.
They are distinct, non-integrated languages. In addition, there is a dichotomy between the code that generates a graphic and the data file that describes it.
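The sketch announced above shows, again with Python as a mere stand-in, what orthogonality and conceptual reflection look like when a language does allow them: functions as arguments, functions inside data structures, and functions applied to functions. The example is ours.

    def twice(f):                  # reflection: a function that takes a function...
        return lambda x: f(f(x))   # ...and returns a new function

    increment = lambda n: n + 1
    add_two = twice(increment)     # a "function of a function"
    print(add_two(5))              # 7

    # Orthogonality: procedures stored inside an ordinary data structure.
    dispatch = {"inc": increment, "double": lambda n: 2 * n}
    print(dispatch["double"](add_two(5)))   # 14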
The Semantics of Programming Languages
The term "semantics" is used in linguistics and refers to the meaning associated with the syntactic constructs of a language. In programming languages, semantics is considered to have two aspects:
Static semantics. It is the information generated at compile time.
Dynamic semantics. It is the information interpreted at run time.
The boundary between the two types of semantics is, however, somewhat fuzzy.
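A small Python illustration of that fuzzy boundary (ours; the dis module shown is Python's standard bytecode disassembler): the structure of the code is fixed at compile time, but the meaning of its operations is only resolved at run time.

    import dis

    def add(a, b):
        return a + b

    dis.dis(add)          # static side: the bytecode fixed at compile time
    print(add(2, 3))      # dynamic side: the same code means addition here...
    print(add("2", "3"))  # ...and string concatenation here ("23")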
In formal language theory, the field of syntax is well established through the different types of grammars (which are really metalanguages). The formalization of semantics, on the other hand, has so far proved a much more complex issue. Different solutions have been proposed, but there is as yet no standard, fully accepted model, as there is for formal syntax.
As a consequence of this complexity, semantics has so far not played the central role it should play in design, implementation and use, despite being the most important aspect of a programming language. That central role has been usurped by formal syntax, a subject that should be considered secondary, since many syntactic forms can represent the same semantics.
In the definition of a programming language, semantics should be defined first, and syntax afterwards. The current situation is the opposite: syntax is defined first (a relatively easy task) and then an attempt is made to define the semantics, almost always in a "forced" way, using whatever conceptual model best adapts to the language. As this adaptation is neither easy nor usually complete, it is common to resort, as a complement or substitute, to natural language, used in a more or less literary or structured way.
The so-called "semantic gap" is the existence of a dissociation or separation between the concepts used in a programming language and the concepts used during the development of applications, both on the data and process side.
A syntactic fundamentalism has prevailed until now; that is, excessive importance has been given to syntax, to the detriment of semantics. Chomsky is the main "culprit", since with his generative grammar he promoted formal syntax without any reference to meaning. This situation must be reversed: semantics (the deep aspect of a language) comes first, then syntax (the superficial aspect).
It is necessary to make semantics explicit. Semantics is always present in every formal language, but it is almost always hidden, because there is no clear, one-to-one correspondence between form (syntax) and substance (semantics).
Semantics is currently receiving growing interest through the so-called "semantic web", the new generation of the web (a W3C initiative). The semantic web is based on a flexible and open space of concepts and relationships. The objective is to overcome the limitations of the current web (based mainly on explicit links) and to move toward emulating the mind: an intelligent web with advanced functionalities such as semantic search, knowledge management, and the automation of reasoning and decision-making processes.
The semantic web has led to the emergence of new languages, with the consequent further growth of the linguistic Tower of Babel. These new languages are based on ontologies, which make it possible to model, describe and represent a domain. The ontology of a domain consists of the selection of a basic set of concepts and the establishment of the relationships between them.
Attempts to capture the semantics of programming languages
There have been numerous attempts to capture the semantics of programming languages. The most important formal models used so far have been the following:
Operational (or interpretive) semantics. Semantics is expressed by means of operations.
Denotational semantics. Denotations are mathematical entities; "semantic functions" map each syntactic domain to a semantic domain (a minimal example follows this list).
Axiomatic semantics. Semantics is expressed by axioms and rules of inference.
Algebraic semantics. Semantics is expressed by means of algebraic constructions.
Attribute grammars. These are formal grammars to which attributes are added.
Modifiable grammars. These are grammars that allow modification of the set of syntactic rules during the parsing process.
Semantic grammars. These are concept-based structured grammars.
Category theory. A category is an algebraic structure consisting of a collection of objects and connections between them (arrows or morphisms). The semantics of the objects and their relations is left open.
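As a minimal example of the denotational style (standard textbook material, not taken from this text), the semantics of a toy language of numeric expressions is given by a semantic function mapping the syntactic domain of expressions to the semantic domain of numbers:

    \begin{align*}
      \mathcal{E} &: \mathit{Exp} \to \mathbb{N} \\
      \mathcal{E}[\![\, n \,]\!] &= n \\
      \mathcal{E}[\![\, e_1 + e_2 \,]\!] &= \mathcal{E}[\![\, e_1 \,]\!] + \mathcal{E}[\![\, e_2 \,]\!]
    \end{align*}

The double brackets enclose syntax; everything outside them is ordinary mathematics, which is precisely the denotational idea.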
All these models have been unsuccessful. The reason is that it is impossible to bring the semantics of a language fully to the surface: once on the surface, it ceases to be semantics, because semantics resides in the depths.
The only way to capture semantics is to define or establish a set of generic (or universal) concepts and then define a syntax, not for the concepts themselves (which are unrepresentable), but for the particular expressions (or manifestations) of those concepts.
The universal language and the universal paradigm
The evolution of computing is thus severely compromised by the absence of a universal standard language and a universal paradigm. This lack of standardization has two aspects:
Semantic.
There is no minimal set of commonly accepted concepts that are orthogonal (independent of each other) and independent of any machine architecture. For this reason, among others, formal semantics has not reached the general public. "What is needed is a folk semantics" [Schmidt, 1997].
Syntactic.
There is a lack of a fully accepted canonical notation that is also reflective of semantics.
This standard language should also cover the operational and descriptive aspects. The ultimate goal would be to achieve a standard that allows perfect code portability (of data and processes).
The complexity and difficulty of defining a universal standard language seem to stem precisely from the adoption of particular approaches and from not using "very high level" semantic resources. Only a generic approach can help to integrate the different paradigms and reduce this complexity, which is only apparent, because at the root of all paradigms there must lie simple and universal concepts.
What we need now is a cultural rather than a technological change, a change that must necessarily be of a unifying type. We must follow Winograd's recommendation and rethink computer programming from the ground up. We need a unifying paradigm based on very high-level concepts that will allow us to recognize the root of all languages and all paradigms, so that we can define a universal formal language, which should be the foundation of the semantic web.
Computer Science vs. Mathematics
Unification
Until now, computer science and mathematics have been distinct disciplines. It is high time to try to unify both disciplines through sufficiently generic and intuitive conceptual mechanisms. Formal semantics should constitute the common foundation of mathematics and computer science.
Some integration is already taking place on different fronts:
Many mathematical concepts have been included in different programming languages. Specific languages have even been designed to model mathematical concepts.
Mathematical concepts are sometimes used to express the semantics of programs (e.g., in denotational semantics).
Many mathematical concepts conceived in a limited way have been generalized by computer science.
New concepts and very powerful expression mechanisms, used by programming languages, can enrich mathematics. For example:
The concept of "repetition" (a certain number of times).
The concepts of the loop and the subroutine (both due to Ada). A loop is an operational structure: the repeated execution of a sequence of instructions as long as a certain condition is met. A subroutine is a sequence of instructions that can be invoked as many times as needed.
The concept of "distribution", also used by some computer languages. For example, the specification language Z [McMorran, 1993] uses this concept, although restricted to certain operations: composition, concatenation, intersection, overlapping and union.
In mathematics, there is no specific distribution operator; rather, distribution is a property that one operation does or does not fulfill with respect to another.
The Cartesian product of sets can be considered a restricted, particular distribution operation: the operands must necessarily be sets, and the result is a set of tuples.
What we need is a generalized distribution mechanism that admits different types of arguments and applies to different operations.
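A sketch of what such a generalized mechanism might look like (ours; itertools.product is Python's standard Cartesian product): the operation is arbitrary, the argument sets are arbitrary, and the Cartesian product falls out as the special case where the operation is tuple formation.

    from itertools import product

    def distribute(op, *arg_sets):
        # Apply op to every combination drawn from the argument sets.
        return [op(*args) for args in product(*arg_sets)]

    print(distribute(lambda x, y: x + y, [1, 2], [10, 20]))
    # [11, 21, 12, 22]
    print(distribute(lambda *args: args, [1, 2], [10, 20]))
    # the Cartesian product itself: [(1, 10), (1, 20), (2, 10), (2, 20)]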
Ultimately, the goal to be pursued is the unification of the two disciplines, making the new mathematical notation "executable", that is, understandable by computers. One attempt in this direction is the J language [Iverson, 1995].
Contributions of computer science to mathematics
Since the advent of computer science we no longer see mathematics in the same way: computer science has fertilized mathematics, providing a new vision, a new paradigm, a new way of looking at it that has helped to clarify it and to understand it better.
Computer science is a relatively recent science. It was born thanks to mathematics, but it has influenced, and continues to influence, mathematics itself, enriching it. This feedback process is a kind of "return of the borrowed favor", realized in various ways:
Language.
The emergence of programming languages has opened up the question of the formalization of mathematical language.
The algorithmic method for formulating and solving problems.
The symbolic process, beyond numerical computation.
Foundation.
Just as in computers there is a set of instructions to perform any calculation, in mathematics there should be a set of main or primary concepts with which to express any derived mathematical concept.
Experimental mathematics, which we explain below.
Experimental mathematics
Experimental mathematics is a type of mathematical research that uses computation and graphical representation to make mathematics more tangible and visible, both to the professional mathematician and to the amateur. This kind of mathematical practice has always existed, although it has flourished since the invention of the computer, which has become as indispensable a tool for the researcher as the telescope is for the astronomer.
"Mathematics is not a deductive science that is a cliché. When we try to prove a theorem, we don't set up the hypotheses and then start reasoning. What we do is trial and error, experimentation, conjecture. We want to find out what the facts are, and what we do in that respect is similar to what a laboratory technician does" [Halmos, 1985].
Among the activities and purposes of experimental mathematics are:
To study mathematical structures in an attempt to discover their properties or patterns.
Testing conjectures and hypotheses; for example, the Riemann hypothesis, which concerns the distribution of the prime numbers (a brute-force sketch of this activity follows the list).
Performing derived calculations and automatic inferences.
Search for counterexamples.
Prove theorems.
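A minimal sketch of this experimental style (ours, with a deliberately naive primality test): test a conjecture by brute force and search for counterexamples. Here, Goldbach's conjecture, that every even number greater than 2 is the sum of two primes, is checked up to a bound.

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def goldbach_counterexamples(limit):
        # Even numbers in (2, limit] that admit no two-prime decomposition.
        return [n for n in range(4, limit + 1, 2)
                if not any(is_prime(p) and is_prime(n - p)
                           for p in range(2, n // 2 + 1))]

    print(goldbach_counterexamples(10_000))   # [] -- no counterexample found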
On theorem proving, the most paradigmatic example is the famous "four-color theorem" (four colors suffice to color any map), proved with the help of a computer in 1976 by Kenneth Appel and Wolfgang Haken. The proof itself can only be checked by computer. Ron Graham (a mathematician at Bell Labs) wondered: "If no human can verify the proof, is it really a proof?".
We do not know the source code of the program or the language in which it was written. Having a standard, canonical, universal language common to humans and computers would be a great help here, because everyone would be able to judge and check whether it is a true proof.
Some examples of computer-aided experimental mathematical achievements are:
Chaos theory, which owes its existence to modern computer technology, since only the computer made it possible to visualize its complex structures graphically. It was on a computer that Edward Lorenz found his famous "attractor" of a chaotic system.
The discovery in 1995 of the Bailey-Borwein-Plouffe formula for the binary digits of π (reproduced after this list); the formula was later formally proved.
The numerical experiment called FPU (after the initials of its authors: Fermi, Pasta and Ulam), carried out in 1955, consisted of the study of a vibrating string (a nonlinear problem) by means of computer simulation at the Los Alamos Scientific Laboratory. They found that the behavior of the system was very different from what intuition predicted: it was very complex, of a quasi-periodic type.
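For reference, the Bailey-Borwein-Plouffe formula mentioned above is

    \pi = \sum_{k=0}^{\infty} \frac{1}{16^{k}} \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right)

and its remarkable property is that it yields the n-th hexadecimal (hence binary) digit of π without computing the preceding ones.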
Computer science has not only fertilized mathematics. It has also done so with other sciences susceptible of a certain formalization. New disciplines have also derived from it, such as artificial intelligence, artificial life and Web science.
Bibliography
Ait-Kaci, H.; Podelski, A. Towards a meaning of LIFE. Third International Symposium PLILP'91 (Programming Language Implementation and Logic Programming, Passau, Germany, 26-28 Aug. 1991), pp. 255-274.
Backus, John. Can programming be liberated from the von Neumann style? A functional style and its algebra of programs. CACM, vol. 21, Aug. 1978, pp. 613-641.
Bobrow, D.G.; Kahn, K.; Kiczales, G.; Masinter, L.; Stefik, M.; Zdybel, F. CommonLoops: merging Lisp and object-oriented programming. OOPSLA'86: Special Issue of SIGPLAN Notices 21: 17-29, 1986.
Dovier, A.; Omodeo, E.G.; Pontelli, E.; Rossi, G. Embedding finite sets in a logic programming language. Proceedings of ELP'92 (Third International Workshop on Extensions of Logic Programming, Bologna, Italy, 26-28 Feb. 1992), pp. 150-167.
Glasgow & Jenkins. Array theory, logic and the Nial language. Proceedings 1988 International Conference on Computer Language (IEEE Cat. Nº 88CH2647-6), pp. 296-303.
De Root, D.; G. Lindstrom, G. (Eds). Logic Programming: Functions, Relations and Equations. Prentice-Hall, Englewood Cliffs, N.J., 1986.
Glasgow, J. The logic programming paradigm in Nial. Proc. of the Fourth Annual A.I. and Advanced Computer Technology Conf., pp. 13-22, Conferencia 4-6 Mayo 1988.
Glasgow, Jenkins, Blevis y Feret. Logic Programming with arrays. IEEE Transactions on Knowledge and Data Engineering, 3-3, pp. 307-319, Sep. 1991.
Hailpern, B. Multiparadigm languages and environments. IEEE Software 3:6-9, 1986.
Hailpern, B. Design of a multiparadigm language. Notes from a session by Dr. Hailpern at IBM Thomas J. Watson Research Center, Yorktown Heights, N.Y., 1987.
Halmos, Paul. I want to be a mathematician: An automathography. Springer-Verlag, 1985.
Howe, D.J.; Stoller, S.D. An operational approach to combining classical set theory and functional programming languages. Theoretical Aspects of Computer Software: International Symposium TACS'94 Proceedings, Sendai, Japan, 19-22 April 1994, pp. 36-55. Springer-Verlag, Berlin, 1994. ISBN 3-540-57887-0.
Hull, Richard; King, Roger. Semantic database modeling: survey, applications, and research issues. ACM Computing Surveys, 19-3 (1987), pp. 201-260.
Jenkins, M.A. Prototyping intelligent database applications in Nial. Proc. of the Fourth Annual A.I. and Advanced Computer Technology Conference, 4-6 May 1988, pp. 23-34.
Jenkins, M.A.; Glasgow, J.I. A logical base for nested array data structures. Computer Languages, 14-1, pp. 35-1, 1989.
Jenkins, M.A. Q'Nial: a portable interpreter for the Nested Interactive Array Language. Software - Practice and Experience, 19-2, pp. 111-126, Feb. 1989.
Koschmann, T.; Evens, M.W. Bridging the gap between object-oriented and logic programming. IEEE Software 5: 36-42, 1988.
Kuhn, Thomas S. The Structure of Scientific Revolutions (Spanish edition: La estructura de las revoluciones científicas, Fondo de Cultura Económica, 1977).
Lindstrom, G. Functional programming and the logical variable. Twelfth Annual ACM Symposium on Principles of Programming Languages, pp. 266-279, 1985.
Moon, D. Object-oriented programming with flavors. OOPSLA'86: Special Issue of SIGPLAN Notices 21: 1-8, 1986.
Placer, John. Multiparadigm research: a new direction in language design. SIGPLAN Notices, vol. 26, no. 3, pp. 9-17, March 1991.
Placer, John. G: a language based on demand-driven stream evaluations. Ph.D. dissertation, Oregon State University, 1988.
Placer, John. The multiparadigm language G. Computer Languages, vol. 16, no. 3-4, pp. 235-258, 1991.
Placer, John. Integrating destructive assignment and lazy evaluation in the multiparadigm language G-2. SIGPLAN Notices, 27 (2): 65-74, Feb. 1992.
Rumbaugh, J. Relations as semantic constructs in an object-oriented language. OOPSLA'87: Special Issue of SIGPLAN Notices 22: 466-481, 1987.
Schmidt, David A. On the Need for a Popular Formal Semantics. ACM SIGPLAN Notices, 32 (1), January 1997, pp. 115-116.
Subrahmanyam, P.A.; You, J. Pattern driven lazy reduction: a unifying evaluation mechanism for functional and logic programs. Eleventh Annual ACM Symp. on Principles of Programming Languages, pp. 228-234, 1984.
Winograd, Terry. Beyond Programming Languages. Communications of the ACM, 22: 7, July 1979.