"Artificial Intelligence and Peace". The theme chosen by Pope Francis for the World Day of Peace The date of January 1, 2024 contains three words that have become more topical than ever in the past year. Since the world learned ChatGPT in November 2022, the term artificial intelligence has not only become familiar to all, but has entered (sometimes returned) to be part of ethical reflections, conferences, articles and analyses.
After years in which the digital world was considered "a matter for experts only," we have all come to realize how profoundly it affects everyone's lives. Peace, however, can never be talked about enough. Because in the world, as Pope Francis has repeatedly reminded us, a third world war has long been raging in pieces. And two of its pieces in particular, Ukraine and the Middle East, we in Europe feel very close to.
Obviously - and not by chance - Pope Francis wanted to bring artificial intelligence and peace together to point out a very real danger: "new technologies are endowed with disruptive potential and ambivalent effects". We have all become aware of this, especially in the last year: "The remarkable progress made in the field of artificial intelligence is having an increasingly profound impact on human activity, personal and social life, politics and the economy."
Not everyone realizes it, but what is happening in the digital world is a double challenge: on the one hand economic and a matter of power (whoever controls the large Artificial Intelligence systems will in fact control important parts of the world), and on the other cultural, social and anthropological. Whoever creates an Artificial Intelligence system knows very well that one of the things to avoid is training the machines with one's own preconceived ideas, and not only cultural ones.
Already today there are systems that distort reality and cause "the logic of violence and discrimination to take root (...) at the expense of the most fragile and excluded". If we think about it, the world needs artificial intelligences to be used responsibly, "so that they may be at the service of humanity and the protection of our common home (...). The protection of the dignity of the person and the care of a fraternity truly open to the whole human family are indispensable conditions for technological development to contribute to the promotion of justice and peace in the world".
It is impossible not to agree with the words of Pope Francis, but it is equally impossible, after reading them, not to ask: what can I do, in my own small way, to make them bear fruit? Not all of us are experts in these matters. And not all of us can make ourselves heard by those who have to make decisions about them. Moreover, many feel so far removed from these things that they delegate "to the experts" every reasoning, every decision, every word on such complex issues.
From this point of view, we Europeans are luckier than other peoples. After more than 36 hours of negotiations, on December 9 the European Commission, the Council of the European Union and the Parliament reached an agreement on the text of the so-called AI Act, the European Union's law on artificial intelligence. It is the world's first regulatory framework for artificial intelligence systems.
The first objective is to ensure that artificial intelligence systems marketed in Europe and used in the EU are safe and respect the EU's fundamental rights and values. To this end, a system has been devised that classifies AI systems according to their level of risk. The highest level covers AI systems operating in critical public-interest sectors such as water, gas, electricity, healthcare, access to education, law enforcement, border control, the administration of justice and democratic processes, as well as procurement.
Biometric systems for identification, categorization and emotion recognition are also considered high risk. What Europe has done is an important step, and one that will guide (at least in part) the regulation being discussed by other major powers such as the U.S. All good, then? Yes and no. Because while it is true that this is one of the right paths to take in approaching Artificial Intelligence, it is no less true that other parts of the world, above all in the East, Russia and Africa, seem determined to stay outside these rules.
Because, as we have written, this is an economic challenge (one that already moves billions of dollars) but also - and above all - a challenge of power. Because beyond the success of chatbots such as ChatGPT, there are already three thousand systems that use artificial intelligence in our lives, governing and, in some cases, directing them. In the words of the sociologist Derrick de Kerckhove, one of the world's leading experts on digital culture and new media, "AI is powerful and effective in so many fields, from medicine to finance, from law to war. It surpasses the human with the algorithm and creates a radical separation between the power of human speech and the power of speech made of sequences of calculations."
In short, the use of Artificial Intelligence is changing us. It is changing the way we move (we are becoming lazier and looking for easy shortcuts) and, to some extent, even the way we reason. It pushes us towards a binary system of 0s and 1s, of black and white, of opposites, gradually eliminating all the shades in between.
Not to mention how Artificial Intelligence can push us in a certain direction by exploiting our cognitive biases. And here the Pope's words return with force: "new technologies are endowed with a disruptive potential and ambivalent effects". With Artificial Intelligence, Bill Gates has announced, "we will be able to defeat hunger in the world". In many hospitals, including Italian ones, it is already being used to better understand certain diseases in order to treat and prevent them more effectively.
Positive examples are numerous and touch almost every field. Even in the Catholic sphere, there are those who have tried to train ChatGPT to produce worthwhile homilies. The result, in this case, has been little more than passable, but good enough to scandalize some priests and to make some of the faithful reflect on how many Sunday homilies are, unfortunately, no better than ChatGPT's.
It is true that we are talking about machines, but those who train, think and create them, and those who interact with them, through commands (the so-called prompts), are people.
In the end, there are two small truths that we must always keep in mind when we read and talk about artificial intelligence. The first is that things change so fast in this field that each time what we write runs the risk, at least in part, of being overtaken by the facts. The second is that each of us, even those who admit to knowing very little, approaches the subject with our own idea in mind.
A preconceived idea that is also the result of the books we have read, the movies and TV series we have seen: from Asimov's novels to the reflections of Luciano Floridi, from 2001: A Space Odyssey and Terminator to the latest episodes of Black Mirror. And each time, our greatest fear is always the same: becoming slaves to machines and/or becoming like machines, giving up our humanity in either case.
After all, if the world only discovered the existence of artificial intelligence in November 2022, we owe it to the fact that the advent of ChatGPT showed us a machine that does (though it would be better to say: gives the illusion of doing) things that until recently were the prerogative of human beings alone: writing, drawing, creating art and holding a dialogue. That is why, every time ChatGPT or another AI makes a mistake, we smile and breathe a sigh of relief. It is a sign that, for a while yet, we will be safe.
On the other side, there are already those who are creating weapons commanded by artificial intelligence. Real war machines that know only how to kill and feel no guilt. What is more: precisely because they seem to act autonomously, they erase the sense of guilt in those who created them and in those who put them on the battlefield. As if to say: it was not I who killed, it was the machine. So the fault lies with the machine alone.
No one knows exactly what the future holds, but not a day goes by without ominous-sounding announcements. One of the latest concerns AGI, or artificial general intelligence, the next evolution of artificial intelligence. According to Masayoshi Son, CEO of SoftBank and a leading technology expert, it "will arrive in ten years and will be at least ten times more intelligent than the sum total of all human intelligence." Confirmation also seems to come from OpenAI, the creator of ChatGPT.
The company has announced that it is forming a team dedicated to managing the risks associated with the possible development of an artificial intelligence capable of crossing the threshold of what is acceptable and becoming "superintelligent". If you think these frontiers are science fiction, you should know that a group of scientists at Johns Hopkins University have asked themselves: what if, instead of trying to make artificial intelligence resemble human intelligence, we did the opposite, that is, used parts of the human brain as the basis for the computers of the future?
This technique is called organoid intelligence (OI) and uses three-dimensional cultures of neural cells obtained in the laboratory from stem cells. Because while it is true that artificial intelligences process data and numbers much faster than humans, our brains are still far superior when it comes to making complex, logic-based decisions.
And here we return to the question posed many lines ago: what can each of us do in the face of all this? First of all, we should be aware that the citizen and the Christian of the third millennium ought to take an interest in these changes. Without alarmism, but with the awareness that we are facing epochal changes.
Journalist for "Avvenire"