Artificial intelligence is at the heart of the current transformation of working methods, of our ways of relating to others, and of our mentality, a transformation characterized by speed and technical complexity. This dossier aims to help us understand its various aspects and repercussions, including its ethical implications, with the help of expert professionals and the reflections Pope Francis has offered on these developments.
Almost from the beginning of the history of computing, computers were programmed to act intelligently. In the late 1950s, Herbert Gelernter of IBM's Poughkeepsie laboratory built a program capable of proving theorems in plane geometry, one of the earliest examples of artificial intelligence. A few years earlier, in 1956, John McCarthy and other computer science pioneers had met at a seminar at Dartmouth College in Hanover, New Hampshire (USA). After naming the new discipline (artificial intelligence), they predicted that within a decade there would be programs capable of translating between two human languages and of playing chess better than the world champion. Machines with intelligence equal or superior to our own would then be built, and we would enter a new stage in human evolution. The old dream of building artificial men would have come true.
But things did not happen as those optimists predicted. Although Arthur Samuel of IBM built a checkers-playing program that stored information about the progress of its games and used it to modify its future moves (that is, it learned), chess turned out to be a much more difficult goal: beating the world champion took more than 30 years longer than predicted.
The translation of texts between two natural languages also proved more difficult than expected. Our languages are ambiguous: the same word can have several meanings, which often differ from one language to another, and, within a single sentence, a word can play several syntactic roles.
The failure of the experts' predictions discouraged artificial intelligence researchers, many of whom turned to other fields of research. Moreover, in 1969, Marvin Minsky and Seymour Papert demonstrated that the one- and two-layer artificial neural networks (perceptrons without hidden layers) that had been studied since the 1950s are incapable of solving some very simple problems, such as computing the exclusive-or (XOR) of two inputs.
During the 1970s, interest in artificial intelligence was renewed by expert systems. Once again, bells were rung and overly ambitious breakthroughs were announced as imminent. The government of Japan, for example, launched shortly afterwards (in 1982) the Fifth Generation project, whose objective was to develop in ten years (always in ten years) machines capable of thinking, of communicating with us in our language, and of translating texts between English and Japanese.
Frightened by the project, the United States and the European Union launched their own research programs, with less ambitious goals. The Americans focused their efforts on military programs: the Strategic Computing Initiative (SCI), which aimed at building autonomous pilotless vehicles on the ground and in the air, as well as "smart" weapons; and the Strategic Defense Initiative, nicknamed Star Wars, which was to protect the United States from nuclear attack. Europe, for its part, focused on the problem of machine translation with the Eurotra project.
In the early 1990s, the Japanese project ended in resounding failure. The U.S. military program was more successful, as could be seen during the second Iraq war. And although the Star Wars system was never deployed, its announcement put pressure on the Soviet Union, which is why some analysts believe it was one of the causes of the end of the Cold War. As for the Eurotra project, it did not lead to an autonomous machine translation system, but to tools that help human translators increase their productivity, along the lines of Google Translate.
In 1997, 30 years later than expected, a computer finally managed to beat the world chess champion (Garry Kasparov) in a six-game match. The driving of automated vehicles (cars and airplanes) has also made great progress. It is therefore said more and more frequently that we are on the verge of achieving true artificial intelligence. Is that possible, and is it really as close as some experts (not many) and the media seem to believe?
Definition of artificial intelligence
Researchers do not always agree on the definition of this branch of computer science, so it is not easy to distinguish clearly the disciplines and applications that belong to this field. Lately it has become fashionable to use the term artificial intelligence to refer to almost any computer application, so its boundaries are increasingly blurred and confusing. A system of public street benches incorporating a wifi repeater and a solar panel that provides energy to charge a cell phone has been presented as artificial intelligence. Where is the intelligence? If anywhere, in the human being who came up with the idea of assembling these devices.
The most widespread definition of the field of artificial intelligence is this: a set of techniques that try to solve problems related to symbolic information processing, using heuristic methods.
An artificial intelligence application should meet the following three conditions: a) the information to be processed is of a symbolic nature; b) the problem to be solved is non-trivial; c) the most practical way of approaching the problem is to use heuristic rules (rules based on experience). The program should be able to extract these heuristic rules from its own experience; that is, it must be able to learn.
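To make the idea of a heuristic concrete, here is a minimal sketch (my own illustration, not part of the definition above) of greedy best-first search on the 8-puzzle, guided by the Manhattan-distance heuristic; the board encoding and function names are choices made for this example.

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 marks the empty square

def manhattan(state):
    """Heuristic: total distance of every tile from its goal position."""
    dist = 0
    for pos, tile in enumerate(state):
        if tile:  # skip the empty square
            goal = tile - 1
            dist += abs(pos // 3 - goal // 3) + abs(pos % 3 - goal % 3)
    return dist

def neighbors(state):
    """All boards reachable by sliding one tile into the empty square."""
    hole = state.index(0)
    r, c = divmod(hole, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            s = list(state)
            s[hole], s[nr * 3 + nc] = s[nr * 3 + nc], s[hole]
            yield tuple(s)

def greedy_search(start):
    """Always expand the board the heuristic judges closest to the goal."""
    frontier = [(manhattan(start), start)]
    seen = {start}
    expanded = 0
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == GOAL:
            return expanded  # how many boards were examined
        expanded += 1
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (manhattan(nxt), nxt))

print(greedy_search((1, 2, 3, 4, 5, 6, 0, 7, 8)))  # a nearly solved board
```

A heuristic of this kind does not guarantee the shortest solution, but it prunes the search drastically, which is precisely why such rules of thumb are useful.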
Artificial intelligence applications
In addition to designing champion players for games that are generally considered to require intelligence, there are many more applications of artificial intelligence. In some, the results have been spectacular and come close to what we intuitively understand by a thinking machine.
Artificial intelligence techniques have been applied to so many topics that the field has become something of a catch-all. Let us look at some of them:
-Intelligent games. In 1997, the program Deep Blue (a dedicated IBM machine) beat the then world champion, Garry Kasparov. Currently the best program is AlphaZero, from the company DeepMind (owned by Google), which is not based on rules introduced by humans but on self-learning (it played some five million games against itself). Other games successfully tackled are backgammon, checkers, Jeopardy!, certain forms of poker, and Go.
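By way of illustration, and going beyond what the article itself describes: chess programs of the Deep Blue generation were built on game-tree search, whose core idea fits in a few lines. Below is a sketch of minimax with alpha-beta pruning, applied to a deliberately trivial game (Nim); the class and all names are my own choices for the example.

```python
import math

class Nim:
    """Toy game: players alternately take 1-3 sticks; taking the last wins."""
    def over(self, sticks):
        return sticks == 0
    def score(self, maximizing):
        # The side to move at a terminal position has just lost.
        return -1 if maximizing else 1
    def moves(self, sticks):
        return [sticks - k for k in (1, 2, 3) if k <= sticks]

def alphabeta(state, depth, alpha, beta, maximizing, game):
    """Minimax search with alpha-beta pruning over an abstract game."""
    if game.over(state):
        return game.score(maximizing)
    if depth == 0:
        return 0  # a real engine would apply a heuristic evaluation here
    if maximizing:
        value = -math.inf
        for nxt in game.moves(state):
            value = max(value, alphabeta(nxt, depth - 1, alpha, beta, False, game))
            alpha = max(alpha, value)
            if alpha >= beta:  # the opponent will never allow this line
                break
        return value
    value = math.inf
    for nxt in game.moves(state):
        value = min(value, alphabeta(nxt, depth - 1, alpha, beta, True, game))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# With 5 sticks the first player can force a win (take one, leaving 4):
print(alphabeta(5, 10, -math.inf, math.inf, True, Nim()))  # prints 1
```

Deep Blue added enormous computing power and handcrafted evaluation functions on top of this scheme; AlphaZero replaced the handcrafted knowledge with a neural network trained by self-play.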
-Logical reasoning. There are three kinds of logical reasoning: deductive (essential in mathematics), inductive (used by the experimental sciences) and abductive (used mainly in the human sciences, history and some branches of biology, such as paleontology). The problem of programming computers to perform logical deductions can be considered solved. On the other hand, it is much more difficult to program inductive or abductive reasoning processes, so this field of research in artificial intelligence remains open.
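As a small illustration of mechanized deduction (my own example, using the freely available Z3 prover, not a tool mentioned in the article): to check that r follows from the premises p, p implies q, and q implies r, one asks the solver whether the premises together with the negated conclusion are unsatisfiable.

```python
# pip install z3-solver  (an off-the-shelf automated prover)
from z3 import Bools, Implies, Not, Solver, unsat

p, q, r = Bools("p q r")

s = Solver()
s.add(Implies(p, q), Implies(q, r), p)  # the premises
s.add(Not(r))                           # the negated conclusion

# If no assignment satisfies premises plus Not(r), the deduction is valid.
print("r is entailed" if s.check() == unsat else "r is not entailed")
```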
-Spoken language processing. The aim is for computers to understand the human voice, so that we can give them commands in a more natural way, without having to use a keyboard. Research in this field has run into difficulties because each person has his or her own way of pronouncing, and because spoken language is even more ambiguous than written language; but much progress has been made recently, and often more than 90 percent of words are understood correctly.
-Written text processing. It is subdivided into two main areas: natural language processing and machine translation.
A relatively recent field is that of data mining, whose objective is to extract information from written texts and try to understand their meaning. For this purpose, statistical methods are used and annotated corpora are built with information about the terms; by using them, programs improve or accelerate the understanding of the texts to be interpreted.
In the field of machine translation the problems multiply, since the programs have to deal with two natural languages instead of one, both riddled with ambiguities and irregularities which, moreover, do not usually coincide between the two languages. Usually the aim of these programs is to produce an approximate (not perfect) translation of the source text, which a human translator can then improve, thus considerably increasing the translator's productivity.
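As a toy illustration of the statistical methods mentioned above (the "annotated corpus" here is invented and absurdly small): a naive Bayes classifier that guesses a text's topic from word counts.

```python
import math
from collections import Counter

# A tiny invented annotated corpus: each text is labeled with its topic.
corpus = [
    ("the bank approved the loan at a low interest rate", "finance"),
    ("stocks fell as the central bank raised rates", "finance"),
    ("the river bank was muddy after the rain", "nature"),
    ("birds nested along the river under the trees", "nature"),
]

counts, totals = {}, Counter()
for text, label in corpus:
    words = text.split()
    counts.setdefault(label, Counter()).update(words)
    totals[label] += len(words)
vocab = {w for c in counts.values() for w in c}

def classify(text):
    """Naive Bayes with add-one smoothing (class priors are equal here)."""
    best, best_score = None, -math.inf
    for label in counts:
        score = sum(
            math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
            for w in text.split()
        )
        if score > best_score:
            best, best_score = label, score
    return best

print(classify("the bank raised the interest rate"))  # -> finance
```

Note how the ambiguous word "bank" is disambiguated purely by the statistics of the words around it, which is essentially how such programs cope with the ambiguity discussed above.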
-Automatic image processing and autonomous vehicles. When we observe a scene, we are able to interpret the visual information we receive and identify independent objects. This field of research aims to program machines and robots to recognize visually the elements with which they must interact. One of its most spectacular applications is the autonomous car. This project, currently well advanced at several companies, aims to build driverless vehicles that can travel on roads and city streets. The research began at Carnegie Mellon University in the late 1980s and received a major boost in the 1990s, when a driverless car first took to the German autobahns. So far in the 21st century research has continued to advance, and the time seems not far off when such vehicles will be allowed on the market.
-Expert systems. These are programs that perform logical deductions, applying rules of knowledge provided by human experts in the subject matter in order to solve concrete problems.
The first attempt (a program called DENDRAL, capable of obtaining the formula of a chemical compound from its mass spectrogram) was built around 1965 at Stanford University. During the 1970s and 1980s, expert systems were applied to medical diagnosis, mathematics, physics, mineral prospecting, genetics, automatic manufacturing, automatic computer configuration, and so on. But in the late 1980s they went into decline. Although they have not completely disappeared, today they no longer play a major role in artificial intelligence research.
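The core mechanism of such systems, chaining rules supplied by human experts, can be sketched in a few lines (a toy of my own; the rules are invented and are not taken from DENDRAL or any real system):

```python
# Each rule: if all the conditions hold, the conclusion can be asserted.
RULES = [
    ({"engine cranks", "no spark"}, "ignition fault"),
    ({"ignition fault", "coil ok"}, "replace spark plugs"),
    ({"engine silent", "lights dim"}, "battery flat"),
]

def forward_chain(facts):
    """Apply the rules repeatedly until no new conclusion can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"engine cranks", "no spark", "coil ok"}))
# -> adds "ignition fault" and then "replace spark plugs"
```

Real expert systems added explanation facilities and ways of handling uncertainty, and many chained rules backwards from a goal instead, but the rule-based idea is the same.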
-Artificial neural networks. This is one of the oldest lines of artificial intelligence research, and also one of the most widely used today. The neurons that make up these networks are greatly simplified compared with those of the human nervous system and of many animals. These networks can solve very complex problems in a very short time, although the solution obtained is usually not optimal, but only an approximation, which is often sufficient for our needs. Currently, neural networks are used in many machine learning applications, such as the automatic translators mentioned above.
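Connecting this with the Minsky-Papert limitation mentioned earlier, here is a sketch (my own, with arbitrary layer sizes and learning rate) of a tiny network with one hidden layer learning XOR, the very function a network without hidden layers cannot compute.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic problem a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# One hidden layer of 4 units; sizes and learning rate are arbitrary.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):              # plain batch gradient descent
    h = sigmoid(X @ W1 + b1)        # hidden layer activations
    out = sigmoid(h @ W2 + b2)      # network output
    # Backpropagate the squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

# Should approach [0, 1, 1, 0]; an unlucky initialization may need
# more iterations or a different random seed.
print(out.ravel().round(2))
```

The solution is approximate, as the text says: the outputs only approach 0 and 1, but for practical purposes that is usually enough.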
-Cognitive computing and knowledge bases about the world. One of the problems that have hindered research in artificial intelligence is that computers possess hardly any knowledge about the world around us. This puts them at an obvious disadvantage with respect to any human being, who does possess this information, acquired since childhood, and can use it to solve common-sense problems that seem trivial, but which are extremely difficult for machines lacking the necessary information. IBM has launched a project on cognitive computing whose goal is to build programs that, starting from very abundant data (big data) and using artificial intelligence and machine learning techniques, are able to make useful predictions and inferences, and to answer questions expressed in natural language.
For the time being, these systems cannot be compared to humans, and are usually restricted to a specific field of application.
Can a machine be intelligent?
In 1950, ahead of his time, the English mathematician Alan Turing attempted to define the conditions under which it would be possible to claim that a machine is capable of thinking like us (the Turing test). This is called strong artificial intelligence, to distinguish it from the weak artificial intelligence of all the applications we have seen so far, in which the machine, of course, does not think.
The Turing test states that a machine will be as intelligent as a human (or will be able to think) when it is able to fool a sufficient number (30 percent) of human beings into believing that they are exchanging information with another human being, and not with a machine. Turing did not limit himself to stating the test, but predicted that it would be passed in about fifty years. He was not too far wrong, for in 2014 a chatbot (a program that takes part in a chat conversation) managed to convince 33 percent of its human judges, after five minutes of conversation, that it was a 13-year-old Ukrainian boy. However, some analysts do not see things so clearly. Evan Ackerman wrote: "The Turing test does not prove that a program is capable of thinking. Rather, it indicates whether a program can fool a human being. And human beings are really dumb."
Many researchers think that the Turing test is not enough to define or detect intelligence. In 1980, the philosopher John Searle tried to demonstrate this by proposing the thought experiment of the Chinese room. According to Searle, for a computer to be considered intelligent, two more things are needed in addition to passing the Turing test: it must understand what it is writing, and it must be aware of the situation. As long as this does not happen, we cannot speak of strong artificial intelligence.
Underlying all this is a very important problem: to build a strong artificial intelligence, machines must be endowed with consciousness. But if we don't know what consciousness is, not even our own, how are we going to achieve it?
Much progress has been made in neuroscience in recent times, but we are still a long way from being able to define what consciousness is and to know where it comes from and how it works, let alone simulate or create it.
Is it possible that advances in computing will lead us, in the more or less long term, to create in our machines something that behaves like a superintelligence? Ray Kurzweil has been predicting it for decades for an almost immediate future that, like the horizon, recedes as we approach it.
We do not know whether it will be possible, by computational means, to construct intelligences equal or superior to ours, with the capacity for self-awareness. But if strong artificial intelligence were achievable, we would face a major problem: the containment problem.
Containment problem
It is the following question: is it possible to program a superintelligence in such a way that it cannot cause harm to a human being?
Essentially, the containment problem is equivalent to Isaac Asimov's first law of robotics. There are recent mathematical indications that it cannot be solved: the question can be reduced to the halting problem, which Turing proved undecidable. If this is confirmed, we are left with two possibilities: a) give up creating superintelligences, or b) give up being sure that such superintelligences will not be able to cause us harm.
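The diagonal argument behind those indications can be sketched in code (a conceptual toy of my own, not the published proof): whatever verdict a purported safety checker gives about the program below, the program does the opposite, so no checker of this kind can be right about every program.

```python
def is_safe(program):
    """A candidate containment checker. This naive version brands every
    program safe, but the argument works for any implementation."""
    return True

def contrarian():
    """Does harm exactly when the checker certifies it as safe."""
    if is_safe(contrarian):
        print("doing harm")        # certified safe, yet harmful
    else:
        print("staying harmless")  # certified harmful, yet harmless

contrarian()  # whatever is_safe answers here, it is wrong
```

This is the same self-reference that makes the halting problem undecidable, which is why a mathematical guarantee of containment appears to be out of reach.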
It is always risky to predict the future, but it seems clear that many of the advances that are lightly heralded as imminent are a long way off.
Professor of Computer Systems and Languages (retired)