Limitations of Computer Science

"Mathematics is interested in infinite systems, while computer science is not." (Donald Knuth)

"Mathematics deals with theorems and static relations, while Computer Science deals with algorithms and dynamic relations." (Donald Knuth)



The Nature of Computer Science

Computer science was born from the convergence of several disciplines, and the development of computers was the culmination of several key ideas. But we cannot forget that Ada Lovelace was the first person to write a computer program, taking as her basis the functional design of Babbage's Analytical Engine.

Ada advanced several important concepts that would become reality years later. She invented the concept of a universal machine capable of performing both numerical computations and symbolic processes. She also invented the concept of the program or algorithm, with three structures: subroutines, loops and conditional branching. In short, Ada made the conceptual leap from the calculating machine to the general-purpose computer. The computer as we think of it today (a universal device) was imagined by Ada and later developed by Turing and von Neumann.

A distinction must be made between theoretical computer science and practical computer science. Theoretical computer science was born with Boole, Ada Lovelace, Turing and Shannon. Practical computer science was born with von Neumann. Turing and Shannon also acted as bridges between theoretical and practical computer science.

Despite all this known background, there is debate about the true nature of computer science. This is justifiable, for if there are doubts about the nature of mathematics, it seems logical that there should also be doubts about the nature of computer science, which has largely inherited from mathematics. There are different opinions on this subject.
Programming Languages

The software crisis. The crisis of programming languages

In 1968, during the NATO Software Engineering Conference, the term "software crisis" was coined to indicate that software was difficult to create, maintain, plan and control, and that it was inflexible, unreliable and hard to use. Many years have passed since then, but the problem remains, as no fully satisfactory solution has been found.

An essential aspect of the development of computer applications is the programming language, which continues to be really frustrating, mainly because of the thinking model it imposes. We must really speak of the crisis of language, rather than of software, a crisis that will not be overcome until we have a standard, universal, powerful, flexible, high-level, creative, easy-to-learn and easy-to-use language.

Since the origins of computing, many programming languages have been invented and are still being invented. And although there are many reasons for this, such as the application of different points of view, paradigms, models, etc., the main underlying reason is dissatisfaction. No programming language has yet been found that fully fits our model of thinking, such that there is a perfect correspondence between our internal semantic resources and the concepts provided by the language.

There have been, throughout the relatively short history of programming languages, a number of milestones that have been important breakthroughs by providing new ideas that have subsequently been adopted by other languages. Among these languages are Pascal, Algol 68, Simula 67, Lisp, AWK, Snobol and Prolog. In recent years there has been a revival of interest in the design of new programming languages. As a result of this interest, new languages such as CLU, Euclid, Mesa, etc. have emerged.

Today, new general-purpose programming languages continue to be invented that try to overcome the limitations of the current ones, adopting new paradigms or integrating existing paradigms, with the aim of providing greater generality, expressiveness and flexibility.

There are many alternatives to improve this situation. One of them is, as Terry Winograd [1979] states, to abandon our outdated view of what programming and programming languages are and start again at the beginning.

John Backus [1978], in his famous Turing Award lecture, already suggested that the "software crisis" can only be overcome by a revolutionary approach to the design of programming languages and to the task of programming itself. Although Backus advocated a functional approach, his manifesto is still in full force.

But there are not only programming languages. There are all kinds of computer languages: programming languages, specification languages, artificial intelligence languages, knowledge representation languages, database management languages, markup languages, scripting languages, and so on. The main problem is that all these languages are different and disconnected from each other. Ideally, there should be a mother language or a common conceptual core from which the different particular languages would emerge.


Multiplicity of programming paradigms

Ever since Thomas S. Kuhn [1977] used the word "paradigm" to refer to a way of looking at the world and to point out that scientific evolution occurs, not continuously, but by leaps caused precisely by paradigm shifts, this term has been imposed in the area of programming languages to indicate the different approaches or points of view regarding the conception of the programming process and the way of modeling a problem domain.

Since the beginnings of computer science, each programming language has been built around a single conceptual primitive. But behind the chosen concept there is a further series of auxiliary or second-level concepts. For example, in imperative programming the conceptual primitive is the statement, but there are arithmetic, logical, control and other kinds of statements.

The programming paradigms that we can consider "classic", and their corresponding conceptual primitives, are:

  Paradigm          Conceptual primitive
  Imperative        Statement
  Functional        Function
  Logic             Rule (condition → action)
  Relational        Relation (table)
  Object-oriented   Object
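As an added illustration (a sketch, not from the original text), the contrast between two of these conceptual primitives can be made concrete in a language such as Python, which can express both styles:

```python
# The same computation expressed under two of the paradigms above.

# Imperative: the conceptual primitive is the statement; state is
# mutated step by step through assignment and control statements.
def sum_of_squares_imperative(numbers):
    total = 0                    # state
    for n in numbers:            # control statement
        total = total + n * n    # assignment statement
    return total

# Functional: the conceptual primitive is the function; the result is
# built by composing functions, with no mutable state.
def sum_of_squares_functional(numbers):
    return sum(map(lambda n: n * n, numbers))

print(sum_of_squares_imperative([1, 2, 3]))  # 14
print(sum_of_squares_functional([1, 2, 3]))  # 14
```

Both functions denote the same computation; only the conceptual primitive used to express it differs.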

It was hoped that the object paradigm or object-oriented paradigm (OOP) would be the "definitive" paradigm, the most appropriate way of conceiving reality and modeling problems. But it soon became clear that it was just another paradigm, with its corresponding limitations, and also the most complex of all.

After the classical paradigms, new paradigms (also based on a conceptual primitive) have emerged, such as aspect-oriented, agent-oriented, event-oriented and constraint-oriented programming. This "tower of Babel" process seems to have no end, which has led to a more than disconcerting situation. Any of these paradigms is legitimate, but none of them is fully satisfactory, as none is sufficiently generic. Paradigms are useful and necessary, but the problem is that the "definitive" paradigm has not yet appeared: a universal paradigm of which all the other paradigms are particular cases. Such a universal paradigm would only be possible with a universal formal language.


Multiparadigm languages

Most programming languages support a single paradigm, that is, a single way of modeling and looking at problems. This implies a restriction on other possible alternatives and, therefore, a brake on creative thinking by imposing a single mental model. To overcome the mental restrictions imposed by single-paradigm languages, the so-called "multi-paradigm languages" have emerged [Hailpern, 1986, 1987] [Placer, 1991].

Multiparadigm languages are justified because there are problems that do not fit completely into a single paradigm: problems where it is appropriate to apply one paradigm in one part and a different paradigm in another. They are also justified because we need more room for maneuver and more freedom than the limitations imposed by a single paradigm allow.

There are many examples of paradigm integration. Sometimes complete paradigms are not integrated, but only some features of a certain paradigm. Paradigm integration can take the following forms:
  1. By means of interfaces between different monoparadigm languages.

  2. By extending a monoparadigm language to support one or more other paradigms.

    In general, this would be the mere accumulation of the linguistic features or resources associated with the component paradigms.

  3. Creating new languages that support several paradigms with a unified linguistic structure.
This last path is the most interesting, since the aim is to integrate the different paradigms by using an underlying deep semantics, extracting the generic mechanisms present at the root of all the particular paradigms. This approach is the one that can most help to understand the connections between the various paradigms that today are considered separate and disconnected.
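As a hedged sketch of what a multiparadigm program looks like (an added example, with purely illustrative names), Python lets the object, imperative and functional primitives coexist in a single piece of code:

```python
from dataclasses import dataclass
from functools import reduce

@dataclass                       # object paradigm: data plus behaviour
class Account:
    balance: float
    def deposit(self, amount):   # method with imperative mutation inside
        self.balance += amount

accounts = [Account(100.0), Account(250.0)]

for a in accounts:               # imperative paradigm: statements in sequence
    a.deposit(10.0)

# Functional paradigm: a fold (reduce) over the collection.
total = reduce(lambda acc, a: acc + a.balance, accounts, 0.0)
print(total)  # 370.0
```

In a language designed around a unified linguistic structure, these three views would ideally emerge from one set of generic mechanisms rather than being accumulated side by side.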

Brent Hailpern [1986] states that this relatively new area of multi-paradigm language research is "exciting and vital," but that theoretical and practical efforts are needed before this area matures and is well understood.

This approach also avoids the so-called "impedance mismatch," which refers to the problem of the existence of different models of thought in a multi-paradigm language. With an integrative approach using only generic mechanisms, such a mismatch would not exist.


Other limitations of programming languages

The Semantics of Programming Languages

The term "semantics" is used in linguistics and refers to the meaning associated with the syntactic constructs of a language. In programming languages, semantics is considered to have two aspects:
  1. Static semantics. The information that can be determined at compile time.

  2. Dynamic semantics. The information that is only determined at run time.
The boundary between the two types of semantics is, however, somewhat fuzzy.
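The distinction can be illustrated with a small added sketch in Python: the type annotations below can be checked statically (for example by a type checker reading the declarations, without running the program), while the actual behaviour is only determined at run time.

```python
def halve(x: int) -> int:       # static semantics: the declared types can
    return x // 2               # be verified without executing the code

# Dynamic semantics: what actually happens depends on run-time values.
try:
    halve("ten")                # parses fine, but fails when executed
except TypeError:
    print("run-time type error")

print(halve(10))  # 5
```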

In formal language theory, the field of syntax is well established through the different types of grammars (which are really metalanguages). The formalization of semantics, on the other hand, has so far been a much more complex issue, for which different solutions have been proposed, but without any standard, fully accepted model emerging, as there is for formal syntax.

As a consequence of this complexity, semantics has so far not played the central role it should have (in design, implementation and use), despite being the most important aspect of a programming language. That central role has instead been occupied by formal syntax, a subject that should be considered secondary, since there are many possible syntactic forms for representing the same semantics.

In the definition of a programming language, semantics should be defined first, and then syntax. But the current situation is the opposite: the syntax is defined first (which is a relatively easy task) and then the semantics, almost always in a "forced" way, using some conceptual model that adapts as well as possible to the language. As this adaptation is neither easy nor usually complete, it is common to resort (as a complement or a substitute) to natural language, used in a more or less literary or structured way.

The so-called "semantic gap" is the existence of a dissociation or separation between the concepts used in a programming language and the concepts used during the development of applications, both on the data and process side.

There has been a prevailing syntactic fundamentalism until now, i.e., excessive importance has been given to syntax, to the detriment of semantics. Chomsky is the main "culprit", since with his generative grammar he was the driving force behind formal syntax, without any reference to meaning. It is absolutely necessary to reverse this situation: semantics (the deep aspect of a language) comes first, and then syntax (the superficial aspect).

It is necessary to make semantics explicit. Semantics is always present in all formal languages, but the problem is that it is almost always hidden, because there is no clear, one-to-one correspondence between form (syntax) and substance (semantics).
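A minimal added example of this many-to-one correspondence: the following Python statements all express the same underlying meaning (increase x by one), showing that form and substance do not correspond one-to-one.

```python
x = 0
x = x + 1          # explicit assignment
x += 1             # augmented assignment, same semantics
x = sum([x, 1])    # function application, same semantics again
print(x)  # 3
```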

Semantics is currently receiving increasing interest in the so-called "semantic web", the new generation of the web (a W3C initiative). The semantic web is based on a flexible and open space of concepts and relationships. The objective is to overcome the limitations of the current web (based mainly on mere explicit links) and go along the lines of emulating the mind, i.e. an intelligent web with advanced functionalities such as: semantic search, knowledge management, automation of reasoning and decision making processes, etc.

The semantic web has led to the emergence of new languages, with the consequent further growth of the linguistic Tower of Babel. These new languages are based on ontologies, which make it possible to model, describe and represent a domain. The ontology of a domain consists of the selection of a basic set of concepts and the establishment of the relationships between them.
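At its core, an ontology of this kind reduces to a set of concepts plus the relations established between them. The following is a minimal, hypothetical sketch (the concept and relation names are illustrative, not taken from any real semantic-web vocabulary), representing the ontology as subject–relation–object triples:

```python
# Illustrative ontology: concepts and relations as triples.
triples = {
    ("Dog", "is_a", "Mammal"),
    ("Mammal", "is_a", "Animal"),
    ("Dog", "has_part", "Tail"),
}

def is_a_closure(concept, triples):
    """Follow 'is_a' links transitively to find all ancestor concepts."""
    ancestors = set()
    frontier = {concept}
    while frontier:
        nxt = {o for (s, r, o) in triples
               if r == "is_a" and s in frontier}
        frontier = nxt - ancestors
        ancestors |= nxt
    return ancestors

print(sorted(is_a_closure("Dog", triples)))  # ['Animal', 'Mammal']
```

Reasoning over the domain (here, computing the transitive closure of "is_a") then becomes a mechanical traversal of those relations.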


Attempts to capture the semantics of programming languages

There have been numerous attempts to capture the semantics of programming languages, using formal models such as operational, denotational and axiomatic semantics. All these models have been unsuccessful. The reason is that it is impossible to capture the semantics of a language: to capture it is to bring it to the surface, where it ceases to be semantics, because semantics resides in the depths.

The only way to capture semantics is to define or establish a set of generic (or universal) concepts and define a syntax, not for the concepts (which are unrepresentable), but for the particular expressions (or manifestations) of those concepts.


The universal language and the universal paradigm

The evolution of computing is thus severely compromised by the absence of a universal standard language and a universal paradigm. This lack of standardization has two aspects:
  1. Semantic.
    There is no minimum set of commonly accepted, orthogonal (mutually independent) concepts that are independent of any machine architecture or implementation. For all these reasons, formal semantics has not reached the general public. "What is needed is a folk semantics" [Schmidt, 1997].

  2. Syntactic.
    There is a lack of a fully accepted canonical notation that is also reflective of semantics.
This standard language should also cover the operational and descriptive aspects. The ultimate goal would be to achieve a standard that allows perfect code portability (of data and processes).

The complexity and difficulty of defining a universal standard language seem to come precisely from the adoption of particular approaches and from the failure to use "very high level" semantic resources. Only a generic approach can help to integrate the different paradigms and reduce this complexity, which is only apparent, because at the root of all paradigms simple and universal concepts must be hidden.

What we need now is a cultural rather than a technological change, a change that must necessarily be of a unifying type. We must follow Winograd's recommendation and rethink computer programming from the ground up. We need a unifying paradigm based on very high-level concepts that will allow us to recognize the root of all languages and all paradigms, so that we can define a universal formal language, which should be the foundation of the semantic web.


Computer Science vs. Mathematics

Unification

Until now, computer science and mathematics have been distinct disciplines. It is high time to try to unify both disciplines through sufficiently generic and intuitive conceptual mechanisms. Formal semantics should constitute the common foundation of mathematics and computer science.

Some integration is already taking place on different fronts. New concepts and very powerful expression mechanisms used by programming languages can enrich mathematics. The ultimate goal to pursue is the unification of both disciplines, making the new mathematical notation "executable", i.e. understandable by computers. One attempt in this direction is the J language [Iverson, 1995].


Contributions of computer science to mathematics

Since the advent of computer science, we no longer see mathematics in the same way because computer science has fertilized mathematics, providing it with a new vision, a new paradigm, a new way of looking at it that has helped to clarify it, to understand it better.

Computer science is a relatively recent science, which was born thanks to mathematics, but which has influenced, and is influencing, mathematics itself, enriching it. This feedback process is a kind of "return of the borrowed favor", and it is being realized in various ways.
Experimental mathematics

Experimental mathematics is a type of mathematical research that uses computation and graphical representation to make mathematics more tangible and visible to both the professional mathematician and the amateur. This way of doing mathematics has always been practiced by mathematicians, although it has gained great strength since the invention of the computer. The computer has become an indispensable tool for scientific research, just as the telescope is for the astronomer.

"Mathematics is not a deductive science; that's a cliché. When we try to prove a theorem, we don't just set up the hypotheses and then start reasoning. What we do is trial and error, experimentation, conjecture. We want to find out what the facts are, and what we do in that respect is similar to what a laboratory technician does" [Halmos, 1985].

One of the activities of experimental mathematics is computer-assisted proof. Its most paradigmatic example is the famous "four-color theorem" (four colors suffice to color any map), proved with the help of a computer in 1976 by Kenneth Appel and Wolfgang Haken. The proof can only be verified by a computer. Ron Graham (a mathematician at Bell Labs) wondered: "If no human can verify the proof, is it really a proof?"

We do not know the source code of the program or the language in which it was written. Having a standard, canonical, universal language common to humans and computers would be a great help, because everyone would be able to judge and check whether it is a true demonstration.
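Note the asymmetry the four-color case exposes: the proof itself is beyond hand-checking, but verifying that a *particular* 4-coloring of a particular map is correct is easy for both humans and machines. The following added sketch (with an illustrative map, not data from the original proof) shows such a verifier:

```python
# A map as an adjacency relation between regions (illustrative data).
adjacent = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C"},
}
coloring = {"A": 1, "B": 2, "C": 3, "D": 1}

def is_proper_coloring(adjacent, coloring, max_colors=4):
    """Check that at most max_colors colors are used and that no two
    adjacent regions share a color."""
    if len(set(coloring.values())) > max_colors:
        return False
    return all(coloring[r] != coloring[n]
               for r, neighbours in adjacent.items()
               for n in neighbours)

print(is_proper_coloring(adjacent, coloring))  # True
```

Checking one coloring is trivial; what the theorem asserts, and what only the machine-checked proof establishes, is that such a coloring exists for *every* planar map.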

Computer science has not only fertilized mathematics. It has also done so with other sciences susceptible to a certain degree of formalization. New disciplines have also derived from it, such as artificial intelligence, artificial life and Web science.



Bibliography