"The dream behind the Web is a common information space in which to communicate by sharing information" (Tim Berners-Lee).
"The decision to make the Web an open system was necessary to make it universal" (Tim Berners-Lee).
The Web and its Limitations
The Science of the Web
The Web has become of enormous importance to society as a whole and is transforming practically all of its aspects: social communication, public administration, education, business management, commerce, entertainment, news, documentation, teleworking, banking, travel, etc. It has become a universal encyclopedia of human knowledge and a universal platform for the deployment of information and applications.
The success of the Web has been due to a variety of reasons, including:
Its ease of use.
The integration of all kinds of digital contents: texts, sounds, images, video, maps, etc.
The ease of jumping from one content to another, thanks to hyperlinks.
Its capacity for immediate dissemination of information. The publication of information makes it immediately available worldwide.
Its decentralization.
The possibility of executing interactive applications, thanks to the fact that Web pages can host scripts.
The possibility of interacting with databases residing on servers. In particular, queries can be submitted through a Web form, with Web pages dynamically generated in response.
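This query-then-generate cycle can be sketched in a few lines. The following Python fragment is purely illustrative (the in-memory product table, its schema and its data are invented for the example, not any real site's): it simulates handling a form submission by querying a database and generating an HTML page in response.

```python
import sqlite3

# Build a tiny in-memory product database (illustrative data only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("lamp", 19.90), ("chair", 45.00), ("desk", 120.00)])

def handle_form_query(max_price):
    """Simulate handling a Web form submission: query the database
    and dynamically generate an HTML page in response."""
    rows = conn.execute(
        "SELECT name, price FROM products WHERE price <= ? ORDER BY price",
        (max_price,)).fetchall()
    items = "\n".join(f"<li>{name}: {price:.2f}</li>" for name, price in rows)
    return f"<html><body><ul>\n{items}\n</ul></body></html>"

page = handle_form_query(50.0)
print(page)
```

A real server would take max_price from an HTTP request and escape the output, but the cycle (form value in, database query, generated page out) is the same.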
The future of society will increasingly depend on the Web, as the ultimate goal will be to contain virtually all elements of information, static and dynamic, data and processes, and increasingly interrelated.
Its importance is so great that there is a science that studies the Web from different points of view: technological, social, contents, applications, etc. This science is called "Web Science", a science that the W3C (World Wide Web Consortium) is trying to develop, since the Web lacks a clear foundation and needs to be studied and built upon one. This science is clearly interdisciplinary, as it studies:
The types of contents and their relationships.
Network structure models.
The technological infrastructure: languages, communication protocols, etc.
Its impact on social habits.
The Net as a model of global consciousness.
etc.
Social sensitivity to the Web is so great that a small innovation can trigger a social phenomenon of great magnitude. This happened, for example, with blogs or with the emergence of social networks such as Twitter and Facebook.
Web technologies
The Web uses a wide variety of languages, models and technologies. The most important of these are:
HTML (Hypertext Markup Language).
HTML was the first language of the Web, its "mother" language. It is oriented to describe the structure and content of Web pages and the definition of links between pages. It has been, and still is, a decisive language in the development of the Web. It is a language that hosts or connects with other languages: scripting languages, such as JavaScript and VBScript, and programming languages, such as Java.
XML (eXtensible Markup Language).
Language for defining data structures and metadata.
DOM (Document Object Model).
DOM is a W3C standard model that defines the internal object structure of HTML and XML documents. It includes an application programming interface (API) for accessing and managing these objects within HTML and XML documents. Through the DOM, programs can access and modify the content, structure and style of HTML and XML documents.
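A brief sketch can illustrate this access-and-modify cycle, using Python's standard library minidom implementation of the DOM (the document content is invented for the example): a document is parsed into its object tree, its content is read through the API, and then modified in place.

```python
from xml.dom import minidom

# Parse a small XML document into its DOM object tree.
doc = minidom.parseString(
    "<page><title>Home</title><body>Hello</body></page>")

# Access content through the DOM API.
title = doc.getElementsByTagName("title")[0]
print(title.firstChild.data)        # the text node inside <title>

# Modify it: the DOM is a live, editable object structure.
title.firstChild.data = "Welcome"
print(doc.documentElement.toxml())
```

The same interface (with browser-specific additions) is what scripts running inside Web pages use to rewrite the page on the fly.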
ECMAScript.
ECMAScript is a scripting language specification published by ECMA International. It is based on Netscape's popular JavaScript language and is currently an ISO standard. Most Internet browsers include an implementation of ECMAScript, together with the DOM, for accessing and manipulating Web pages; browsers often add their own ECMAScript extensions.
But today's Web is far from being a perfect environment for information sharing. Among its many limitations are the following:
It is not homogeneously structured. The sources of information are heterogeneous.
The relationships between information elements are links, which are very restricted:
They are physical addresses that change when an information element changes its physical location.
They are of the 1:1 type, that is, they link one element to another element.
They are one-way (arrows pointing from an origin to a destination).
They are explicit. They must be specified at the origin.
There are no higher order links (links of links), nor groups of links.
They cannot be assigned attributes.
The elements that are related (linked) are Web pages. Arbitrary types of information elements cannot be linked.
There is no common, unified language to encode all types of information elements. This language should also be transparent, with the same codification at the external and internal level, and understandable to all. This language should incorporate a conceptual model.
Web documents include only internal (or intrinsic) metadata, i.e., metadata embedded within the pages themselves (in "meta" tags). As a consequence, extracting information is difficult and costly.
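As a rough illustration of what such extraction involves, the following Python sketch (using the standard library's HTMLParser; the page and the metadata names in it are invented) must parse an entire page just to recover the "meta" tags embedded in it, which is essentially all the current Web offers:

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Collect name/content pairs from the <meta> tags of a page."""
    def __init__(self):
        super().__init__()
        self.metadata = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "name" in d and "content" in d:
                self.metadata[d["name"]] = d["content"]

# An invented page: its metadata is buried inside the markup.
html_page = """<html><head>
<meta name="keywords" content="semantic web, ontology">
<meta name="author" content="Jane Doe">
</head><body>...</body></html>"""

parser = MetaExtractor()
parser.feed(html_page)
print(parser.metadata)
```

External metadata, by contrast, could be queried directly without fetching and parsing the page at all.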
Web sites are not categorized. No distinction is made, for example, between a personal page, the portal of an online store, a training center, etc.
The search of the contents is by words or phrases, but does not allow more elaborate or structured queries.
Databases cannot be searched directly. The databases are hidden.
There is a conceptual separation or dichotomy between the user's environment (on their PC) and the Web. There should be a local (user's) Web and a global Web, both with the same structure and technology.
The Semantic Web
The concept
The Semantic Web is an idea proposed in the late 1990s by the inventor of the Web, Tim Berners-Lee [2001]. It is a W3C project to create a new generation of the Web. The idea is to overcome the limitations of the current Web (based mainly on explicit links) and move towards a Web based on meaning, semantics, a conceptual and intelligent Web, with homogeneous, well-defined and structured information.
Machines do not "know" what semantics is, nor are they aware of it; they only manipulate symbols at a superficial level. It is therefore necessary to try to achieve a mind-machine semantic synchronization, eliminating the semantic gap, so that there is coherence and coordination between humans and the way content is organized in machines. In short, to establish a correspondence or analogy between the "mind" of the machine and that of man, and to move towards a greater consciousness of the machines, because semantics is consciousness: the higher the semantic level, the greater the consciousness.
The intended or desired functionalities of the Semantic Web are:
Establishment of relationships between all types of objects or information elements (documents, images, attributes, applications, classes of objects, abstract concepts, etc.).
Semantic search.
Categorization of content using external (or extrinsic) metadata, i.e., independent of the information object, but maintaining a persistent link to the object.
Better integration and coherent and consistent interrelation of different resources.
More efficient and secure processes.
Automation of operations and optimization of results.
Definition of intelligent agents.
Resource sharing and reuse.
Automatic inferences through "reasoners", programs that discover relationships between different pieces of information.
Automatic decision making.
Cooperative work.
Knowledge management.
Control of the way information is presented and individualized answers, depending on the user or type of user.
Semantic Web services that can be used by all resources, including other Web services.
Web space is no longer a space for hosting Web pages, but a semantic or mental space.
Each Web space is an active semantic space, with an active virtual agent representing a real user. This is intended to improve interoperability.
Personal Web support.
Communication between Web spaces.
Semantic search of Web resources.
Semantic metadata to categorize content, even within Web pages.
Key technologies are: XML (data structure definition), RDF (metadata definition) and OWL (ontology definition).
A foundation based on ontologies, a concept widely used in AI (artificial intelligence). An ontology is a set of concepts and the relationships between them, used to model, ground, describe and represent a domain. The components of an ontology are classes, properties, individuals and relationships between classes. For example, an ontology about Painting might include: classes such as Painter, Painting, Style and Museum; relationships such as Painting-Author, Painter-Style, etc.; and instances of painters, paintings, museums and styles.
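The Painting ontology just described can be sketched in code. The following Python fragment is purely illustrative (it captures only a fraction of what an ontology language like OWL can express, and the instances are invented): classes become types, relationships become typed attributes, and individuals become instances.

```python
from dataclasses import dataclass

# Classes of the ontology.
@dataclass(frozen=True)
class Style:
    name: str

@dataclass(frozen=True)
class Painter:
    name: str
    style: Style        # relationship Painter-Style

@dataclass(frozen=True)
class Painting:
    title: str
    author: Painter     # relationship Painting-Author

# Individuals (instances).
impressionism = Style("Impressionism")
monet = Painter("Claude Monet", impressionism)
water_lilies = Painting("Water Lilies", monet)

# Relationships can be traversed like any other attribute.
print(water_lilies.author.style.name)
```

A real ontology would add class hierarchies, cardinality constraints and inference rules on top of this bare structure.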
The Semantic Web promises to "organize the world's information" in a more logical way. This new generation of the Web has sparked a growing interest in the subject of semantics, which has led to the emergence of new languages, especially those based on ontologies, with the consequent further growth of the linguistic Tower of Babel.
Some authors, such as Andrew Keen, author of The Cult of the Amateur [2007], believe that the Semantic Web is an "unworkable abstraction" and will never work.
Semantic Web technologies
Semantics is being built on a set of different languages, more or less interrelated, without a single common and universal semantic base language. Without such a language it is not possible (or it is very complex) to found and build Semantics. Although a multitude of languages have proliferated in the attempt to develop Semantics, those standardized or recommended by the W3C are the following:
XML. Language for the definition of data structures and metadata.
XMLS. Language for defining XML data types.
RDF. Language to define "triples" Subject-Predicate-Object.
RDFS. Language for defining RDF data types.
OWL. Language to define ontologies.
SPARQL. Language for querying RDF databases.
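The triple model at the heart of RDF and SPARQL can be illustrated with a minimal sketch. The following Python fragment is a toy (invented data, not an RDF library): it keeps Subject-Predicate-Object triples and answers pattern queries in which, as in SPARQL, unbound positions act as variables.

```python
# A minimal triple store: RDF reduces all statements to
# Subject-Predicate-Object triples.
triples = {
    ("Monet",       "paints",      "WaterLilies"),
    ("Monet",       "hasStyle",    "Impressionism"),
    ("WaterLilies", "exhibitedAt", "Orangerie"),
}

def match(s=None, p=None, o=None):
    """Pattern query: None acts as a variable, as in SPARQL."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# "What does Monet paint?"
# (roughly: SELECT ?o WHERE { :Monet :paints ?o })
print(match(s="Monet", p="paints"))
```

Real RDF additionally uses URIs as global identifiers for subjects, predicates and objects, which is what lets triples from different sources be merged.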
Ideally, there would be a single universal language capable of expressing all types of content and relationships.
The problems of the Semantic Web
In conclusion, the technology platform chosen for the Semantic Web is:
Confused and not very coherent, for it relies on a set of languages without a common semantic foundation. For example, ontologies are defined on top of two languages, RDF and OWL. Moreover, coding in RDF and OWL is extremely difficult and error-prone.
Complex. While XML is simple, RDF is complex, and it is complex because it is poorly thought out: it turns XML into a "dogma" (even though there are semantics that do not fit this syntax) and addresses the language issue for Semantics only partially, on the basis of a restrictive language like XML, whose restrictive syntax limits combinatorics.
Insufficient, because it does not include a programming language, let alone one capable of expressing all paradigms. This forces developers to resort to different traditional languages for application development.
Non-orthogonal. That is, resources cannot be freely combined (and many do not even exist), for example:
You cannot assign attributes to functions, to rules, etc.
You cannot create functions that produce rules as a result.
You cannot create function sets or rule sequences.
etc.
There is also no full conceptual recursion: rules of rules, functions of functions, attributes of attributes, ontologies of ontologies, etc. Orthogonality and conceptual recursion imply freedom, flexibility and creativity.
Therefore, the path taken by Semantics is, in our opinion, clearly wrong. The problem is that it is intended to approach the subject of Semantics with the old ideas and with the usual restrictive languages. The proof is that the initial general ideas on Semantics date back to the late 1990s, and since then little progress has been made.
MENTAL, a Language for the Semantic Web
The solution is to redirect the issue in the sense of defining a unified semantic-based language, with simple, universal and clear concepts as the foundation for everything else. Since we are defining a Semantic, the language must be essentially semantic in nature, but of the deepest possible semantics, a language of artificial intelligence, free of restrictions, orthogonal, in which resources can be freely combined according to semantic dimensions or degrees of freedom.
Therefore, the technologies chosen for the Semantic Web are useless. The alternative we propose here is to use MENTAL, a language based on universal semantic primitives, which overcomes all the above limitations, offering a whole series of advantages:
It is a humanistic, expressive, compact, open, extensible, flexible and creative language with a high level of abstraction and clear basic concepts, easy to learn and use.
Capable of expressing all types of programming paradigms, classic and new ones (event-driven, constraint-driven, aspect-driven, agent-driven, etc.).
Transparent. The code is always known at all levels.
With a simple and flexible syntax, which can even be modified dynamically.
Capable of dynamically extending the base semantics.
Orthogonal, i.e., with non-restrictive combinatorics.
It allows handling all types of expressions: static, dynamic, generic, specific, descriptive, operational, virtual, linked, shared, imaginary, etc.
It is a metalanguage with which new languages can be defined.
It allows the universal connection and interrelation between all kinds of information elements, beyond the simple link: data structures, programs, procedures, functions, rules, objects, text pages, sentences, words, and even characters. These connections are possible because all elements are expressions.
It allows the sharing of all kinds of expressions, thus favoring reuse.
MENTAL would be the common or unifying factor, in a double aspect:
At a deep level, it is the operating system, the universal semantics of the language.
At the superficial level, it is the abstract space, the manifest. The Web would be the abstract space of MENTAL, that is, the network of relationships and interrelationships between the different expressions of the abstract space.
Moreover, everything would have the same structure. That is, the abstract space of an individual (local) computer would have the same structure as the global or Web one. Everything would be governed by the same laws: the universal semantic primitives.
Beyond the Semantic Web
MENTAL goes beyond the objectives set out in the Semantic Web project. Some of the new possibilities would be:
Universal interrelation. Everything would be interrelated, diluting the concept of page.
Universal services, accessible by the rest of the resources. They can be of high or low level. For example, calculating the square root of a number, sorting records, obtaining the stores that sell a certain item, etc.
Automatic creation of logical or virtual links based on the corresponding attributes of each data element.
Automatic categorization of information elements.
Dependency relationships, with automatic modifications. If an element changes, its dependents change automatically.
Automatic valuations (qualitative and quantitative) of the information elements.
Expert knowledge in the form of rules, meta-rules and other generic expressions.
Virtual websites built from other websites.
Websites automatically generated from other websites.
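The dependency relationships listed above, where a change in one element automatically modifies its dependents, can be sketched as a simple observer mechanism. This is a hypothetical illustration in Python, not MENTAL code; the element names and the recompute rule are invented:

```python
class Element:
    """An information element whose dependents are updated
    automatically when it changes (a simple observer sketch)."""
    def __init__(self, name, value):
        self.name, self.value = name, value
        self._dependents = []           # pairs (element, recompute_fn)

    def add_dependent(self, other, recompute):
        self._dependents.append((other, recompute))

    def set(self, value):
        self.value = value
        for dep, recompute in self._dependents:
            dep.set(recompute(value))   # propagate the change onward

price = Element("price", 100.0)
price_with_tax = Element("price_with_tax", 121.0)
price.add_dependent(price_with_tax, lambda v: round(v * 1.21, 2))

price.set(200.0)                        # dependents update automatically
print(price_with_tax.value)
```

Because each dependent's own set() also propagates, chains of dependencies update transitively.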
Semantic links.
They would consist of a structure of attributes (not necessarily unique) by means of which an expression would be connected to other expressions in n:m fashion (many to many).
These links would be dynamic and independent of physical addresses, since they depend only on the attribute structures. Changing the attributes would change the links automatically.
Link expressions can be defined by a selection process.
These links could be explicit or implicit, but all of them based on the same attribute philosophy.
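A minimal sketch of this attribute philosophy (hypothetical element and attribute names, not MENTAL code): links are never stored as addresses but are computed from shared attribute values, so changing an element's attributes automatically changes its links.

```python
# Elements carry attribute structures; links are derived from matching
# attributes rather than stored as physical addresses.
elements = [
    {"id": "page1",  "topic": "impressionism", "kind": "article"},
    {"id": "image7", "topic": "impressionism", "kind": "image"},
    {"id": "page2",  "topic": "cubism",        "kind": "article"},
    {"id": "db3",    "topic": "impressionism", "kind": "database"},
]

def semantic_links(attribute):
    """Group elements sharing a value of the given attribute: each
    group is an implicit n:m link among all its members."""
    groups = {}
    for e in elements:
        groups.setdefault(e[attribute], []).append(e["id"])
    return {k: v for k, v in groups.items() if len(v) > 1}

print(semantic_links("topic"))

# Changing an attribute changes the links automatically:
elements[2]["topic"] = "impressionism"
print(semantic_links("topic"))
```

Note that the links relate elements of any kind (pages, images, databases), and one element participates in many links at once.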
Consciousness, semantics and MENTAL
The Internet, and in particular the Web, is becoming the brain of the planet and a metaphor for the mind and consciousness. Indeed, there are analogies between the Web and certain characteristics associated or related to consciousness:
Connection.
Consciousness is the capacity for relationship or connection. The greater the connection, the greater the consciousness.
The Web is a network of connections.
Transcendence of space and time.
In pure or transcendental consciousness, there is no space and time.
In the Web, the limits of space and time are transcended, in two senses:
Because it can be accessed from anywhere and at any time.
Because everything is non-local, as in quantum physics, that is, there are no distances and there is no time, because the answers are practically instantaneous.
Unity.
Consciousness is perception of the underlying unity behind diversity.
The Web is a global, common, information-sharing repository or environment that is viewed as a unit.
Fractality.
Consciousness and mind are of fractal type, that is, they are based on the same mechanisms at all levels.
The Web has a fractal structure, not from the point of view of its contents, but from the point of view of its technological structure, which is the same at all levels of the Web.
Simplicity.
Awareness is closely linked to simplicity. The greater the simplicity, the greater the consciousness, because simplicity connects us with the essential and the profound, where everything is related.
The Web is simple to use and simple to develop.
With the Semantic Web it is intended to advance in this line of emulation of the global consciousness or mind, but with MENTAL this objective is closer, because:
The application of MENTAL as a model of the mind will produce a mental Web.
MENTAL is a language of archetypes of consciousness. Therefore, the Web would be beyond the Semantic Web.
Everything would become simpler, easier and more convenient. Language would contribute to global awareness, because maximum awareness is achieved with maximum simplicity.
Semantics is a key piece for the progress of the information society and its evolution towards the knowledge society, but with MENTAL perhaps it can go further: towards the society of wisdom and transcendence.
Addenda: The Evolution of the Web
Web 1.0
It is the traditional Web invented by Tim Berners-Lee in 1989:
It is a centralized information source. It is a read-only Web for users, who cannot interact with the content of a page (adding comments, rating the page, etc.). Writing is reserved for specialized publishers who decide what is available online. Readers and writers are disconnected.
Pages and links are static and infrequently updated.
The fundamental tool is the browser.
There is little or no interaction between websites.
Uses HTTP, URI, HTML, CSS, XML and XHTML web standards.
With downloadable components in the client: ActiveX and Java.
Search engines (Altavista, Yahoo, etc.) ignore the relevance or weight of web pages.
Uses HTML forms to send emails.
Other optional features: frames and guestbooks.
Typical example: Encyclopedia Britannica.
Web 2.0
It is today's web. The term "Web 2.0" was coined by Darcy DiNucci in her article "Fragmented Future" [1999], which saw the Web as an interactive medium accessible from different devices (PCs, mobile phones, consoles, etc.). Its definition is still fuzzy, but certain characteristics can be identified:
It goes beyond static pages and information retrieval. It is a participatory, democratic, read/write web for everyone, publishers and users. The distinction between users and publishers is diluted. Users are "prosumers", i.e. producers and consumers at the same time. It is really the vision of the inventor of the Web: a collaborative medium, an integrated space for meeting, reading, writing and running applications. Web 2.0 is also called "Social Web" because of the collaborative approach, where users are the most important elements.
User contributions are of several types: wikis, blogs, social networks, etc. A wiki is a website that allows users to add, delete and edit content, like a collaborative encyclopedia (in Hawaiian, "wiki" means "fast"). Social networks allow users to interact with each other via virtual communities.
The web is a universal, inclusive, standards-based platform. It is a service-oriented web, with web applications.
Web sites can communicate, usually in XML format.
Applications are built on top of the web. In this sense it includes what is now called "Cloud Computing", since in this philosophy the software runs on the web platform, something Web 2.0 already contemplates.
Pages can be dynamic, with content based on databases or files.
Allows updating databases via forms.
Offers hosting services for web pages and databases.
Uses mashups. A mashup is a web page or application that uses and combines data, presentation and functionality from one or more sources to create a new service.
Allows free classification of information through "folksonomies" (folk taxonomies), i.e., taxonomies created by the users themselves.
Allows meta-data or tags to categorize content and facilitate searches.
Uses technologies to generate dynamic and interactive pages such as: Ajax (Asynchronous JavaScript and XML), ASP, JSP, PHP, Ruby, Perl and Python.
Search engines (such as Google) rely on algorithms that consider page ranks.
Uses content syndication technologies (such as RSS) to notify users of new content.
Examples: 1) Wikipedia (collaborative encyclopedia); 2) Flickr (photo publishing and sharing); 3) Napster (now defunct), a music file distribution, sharing and exchange service (in MP3 format) via P2P (peer to peer).
Web 3.0
The term "Web 3.0" first appeared in an article by Jeffrey Zeldman [2006], a critic of Web 2.0. Although it is sometimes identified with the Semantic Web, it is a conception that aims to be different, but there is still no consensus on its meaning and definition.
It is a path towards the intelligent Web and convergence with AI. With programs that can reason and with intelligent agents. Applications and agents exchange data, process it and even make inferences to generate new information.
Raises the semantic level of information. It uses semantic metadata and ontologies.
It is a Web of data. It is about linking data as Web pages are linked. This Web of data is a step towards the Semantic Web.
It is a path towards the 3D Web or geospatial Web, a topic led by the Web3D Consortium. It is the transformation of the Web into a series of 3D spaces, beyond the concept proposed by Second Life. Second Life is a program launched on June 23, 2003 that can be accessed free of charge on the Internet. Each user has an avatar, and users can interact with each other through interfaces called "viewers". Residents can thus explore the virtual world, interact with other residents, establish social relationships, engage in individual and group activities, etc.
Contents are more accessible.
Attempts to unify the disparity of Web standards (XML, RDF, OWL, etc.).
John Smart, in his Metaverse Roadmap, defines Web 3.0 as the first generation of the Metaverse, with the following characteristics:
It is a 3D Web, with 3D spaces and environments, like Second Life, the best known metaverse.
It is a convergence or fusion of the physical world and the virtual world. A meeting between geospatial maps and virtual worlds. A virtual Earth to navigate the physical Earth. Physical reality virtually extended (augmented reality), i.e., new layers of persistent information about perceptions of the physical world.
Each user has an avatar, which becomes a personal online agent. The avatar is the personification of the user in the virtual world. The user acts directly on the environment or indirectly with the avatar. The communication between avatars is done through a "Conversational Interface" (CI).
The boundary between the real and the virtual is blurred. Reality-shaping technologies offer new virtual environments. Companies and institutions become virtual. Simulations become real.
The distinction between game and social world is diluted. It is a social space of interaction with which social experiences are gained. The Web as an "online life". Simulated worlds as the space for interaction.
New forms of partnership and collaboration.
It is a space of greater freedom than the physical world. It allows one to easily create virtual objects (as in Second Life), to create networks of objects and people with interactions between them, etc.
The Web of Things (or Internet of Things). With blogjects (object blogs). With physical hyperlinks (Physical Hyperlinks, PH) such as the two-dimensional QR barcode (Quick Response), a code readable by 3G phones.
It allows quick and easy learning.
With the possibility of syndication in the virtual world.
With location-aware systems, such as Foursquare, which geolocate the user through their fixed or mobile device.
Augmented reality with context-sensitive sensors.
With semantic type standards.
There are inner technologies (for accessing the Web directly or indirectly through the avatar) and outer technologies, which provide information from the user's external world.
Mirror worlds. They are informationally extended mirror worlds or models of the physical world. The best example of a mirror world is Google Earth, a digital map of the Earth accessible via the Web, a GIS (Geographic Information System) type mirror world.
Vital records (lifelogging). These record and report the internal states and vital histories of objects and users. It is the "documented life", recorded for later examination. At the object level there are blogjects, information shadows, spimes, etc. An information shadow is a set of metadata associated with an intelligent product or service. A spime (a neologism coined by Bruce Sterling) is a named object that is tracked through space and time over its life cycle.
Web 4.0
It is a Web concept with the following features:
The Web is the operating system (WebOS).
It links to Web Science (the science of the Web).
It is an "intelligent" Web, where we can find more relevant or meaningful information.
It includes the Web of Things, with hyperlinks between objects.
It is a ubiquitous Web.
It aims to unite intelligences so that people and things can communicate with each other and make decisions.
Ray Kurzweil predicts that by 2029 WebOS will parallel the human brain.
Summary
In summary, Web 1.0 is static, Web 2.0 is participatory, Web 3.0 is intelligent, and Web 4.0 is the operating system (WebOS).
MENTAL is the WebOS, where human mind and artificial mind converge through the archetypes of consciousness.
Cloud Computing
A computing paradigm in which all types of computing services are delivered over the Internet. The "cloud" is a metaphor for the Internet, meant to convey that the internal complexity is hidden: only the external or superficial part is offered, which is simple to use because it requires no technical knowledge. The advantages are:
Security. All information is stored on Internet servers, which guarantees its security.
Access. The services can be accessed from any device with an Internet connection.
Economy. It avoids having to install one's own infrastructure; one pays only for the consumption made, and the energy consumed is only what is necessary.
Simplicity. Complexity is hidden. Only the simplicity of use is shown.
Updating. Updates from Internet servers are valid for all users and all devices.
The disadvantage is the dependence on the service provider.
There are different types of services: Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). Examples of SaaS are: Google Apps, Salesforce.com and Microsoft Office 365.
Bibliography
Alesso, H. Peter; Smith, Craig F. Thinking on the Web. Berners-Lee, Gödel and Turing. Wiley, 2008.
Berners-Lee, Tim; Hendler, James; Lassila, Ora. La Red Semántica. Investigación y Ciencia, July 2001, pp. 38-47.
Berners-Lee, Tim. What the Semantic Web can Represent. Internet.
Brin, David. The Transparent Society: Will Technology Force Us To Choose Between Privacy And Freedom? Basic Books, 1999.
Castronova, Edward. Synthetic Worlds: The Business and Culture of Online Games. University Of Chicago Press, 2006.
Daconta, Michael C.; Obrst, Leo J.; Smith, Kevin T. The Semantic Web. A Guide to the Future of XML, Services, and Knowledge Management. Wiley, 2003. Available on the Internet.
Feigenbaum, Lee; Herman, Ivan; Hongsermeier, Tonya; Neumann, Eric; Stephens, Susie. La Red Semántica en acción. Investigación y Ciencia, February 2008.
Fensel, Dieter; Hendler, James; Lieberman, Henry; Wahlster, Wolfgang (editors). Spinning the Semantic Web. MIT Press, 2003.
Flew, Terry. New Media: an introduction. Oxford University Press, Australia & New Zealand, 2008.
Greenfield, Adam. Everyware: The Dawning Age of Ubiquitous Computing. New Riders Publishing, 2006.
Heflin, J. and Hendler, J. Semantic Interoperability on the Web. In: Proc. of Extreme Markup Languages 2000 (Graphic Communications Association, Alexandria, VA, 2000), pp. 111-120.
Keen, Andrew. The Cult of the Amateur: How Today's Internet Is Killing Our Culture. Doubleday/Currency, 2007.
Martin; Martin. XML. Prentice Hall.
Shadbolt, Nigel; Berners-Lee, Tim. La Ciencia de la Red. Investigación y Ciencia, December 2008, pp. 48-54.
Siegel, David. Pull: The Power of the Semantic Web to Transform Your Business. Portfolio, 2009.
Smart, John; Cascio, Jamais; Paffendorf, Jerry. Metaverse Roadmap. Pathway to the 3D Web. Internet.
Sunstein, Cass R. Infotopia: How Many Minds Produce Knowledge. Oxford University Press, USA, 2008.
Tapscott, Don; Williams, Anthony D. Wikinomics. La nueva economía de las multitudes inteligentes. Paidós Ibérica, 2011.