The Vatican

Morality of AI depends on human decisions, says Vatican in new document

The Vatican warns about the ethical risks of artificial intelligence, recalling that it must serve the common good and not cause harm. While recognizing its positive potential, the document urges regulation that guarantees human dignity and prevents abuse.

Cindy Wooden - January 30, 2025 - Reading time: 4 minutes

(OSV News). "Technological progress is part of God's plan for creation," the Vatican said, but people must take responsibility for using technologies such as artificial intelligence (AI) to help humanity and not harm individuals or groups.

"Like any tool, AI is an extension of human power, and while its future capabilities are unpredictable, mankind's past actions provide clear warnings," says the document signed by Cardinals Víctor Manuel Fernández, Prefect of the Dicastery for the Doctrine of the Faith, and José Tolentino de Mendonça, Prefect of the Dicastery for Culture and Education.

The document, approved by Pope Francis on January 14 and made public by the Vatican on January 28 - the day after International Holocaust Remembrance Day - says that "the atrocities committed throughout history are sufficient to raise deep concern about possible abuses of AI."

Antiqua et Nova

Entitled "Antiqua et Nova (Old and New): A Note on the Relationship between Artificial Intelligence and Human Intelligence," the paper focuses especially on the moral use of technology and the impact that artificial intelligence is already having or could have on interpersonal relationships, education, work, art, health care, law, war and international relations.

AI technology is not only used in applications such as ChatGPT and search engines, but also in advertising, self-driving cars, autonomous weapons systems, security and surveillance systems, robotics in factories and data analysis, even in healthcare.

The Popes and Vatican institutions, in particular the Pontifical Academy of Sciences, have been monitoring and expressing concern about the development and use of artificial intelligence for more than 40 years.

"Like any product of human creativity, artificial intelligence can be directed toward positive or negative ends," the Vatican document states. "When used in a way that respects human dignity and promotes the well-being of individuals and communities, it can make a positive contribution to the human vocation."

Human decisions

"However, as in all areas where human beings are called to make choices, here too the shadow of evil looms," the dicasteries said. "Where human freedom allows for the possibility of choosing what is wrong, the moral evaluation of this technology must take into account how it is directed and used."

Humans, not machines, make the moral decisions, the paper said. Therefore, "it is important that ultimate responsibility for decisions made using AI rests with human decision-makers and that there is accountability for the use of AI at every stage of the decision-making process."

The Vatican document insisted that, although artificial intelligence can quickly perform some very complex tasks or access large amounts of information, it is not truly intelligent, at least not in the same way that humans are.

"A proper understanding of human intelligence cannot be reduced to the mere acquisition of facts or the ability to perform specific tasks. On the contrary, it implies a person's openness to the ultimate questions of life and reflects an orientation toward the true and the good."

The specifically human

Human intelligence also involves listening to others, empathizing with them, building relationships and making moral judgments, actions that even the most sophisticated AI programs cannot perform, the document says.

"Between a machine and a human being, only the human being can be sufficiently self-aware to the point of listening to and following the voice of conscience, discerning with prudence and seeking the good that is possible in each situation," the document said.

The Vatican dicasteries issued several warnings or caveats in the document, calling on individual users, developers and even governments to exercise control over how AI is used and to commit "to ensure that AI always supports and promotes the supreme value of the dignity of every human being and the fullness of the human vocation."

First, they noted, "presenting AI as a person should always be avoided; doing so for fraudulent purposes is a serious ethical violation that could erode social trust. Similarly, using AI to deceive in other contexts - such as in education or in human relationships, including the sphere of sexuality - should also be considered immoral and requires careful oversight to avoid harm, maintain transparency, and ensure the dignity of all individuals."

New discriminations

The dicasteries warned that "AI could be used to perpetuate marginalization and discrimination, create new forms of poverty, widen the 'digital divide' and worsen existing social inequalities."

While AI promises to increase productivity in the workplace by "taking over mundane tasks," according to the paper, "it often forces workers to adapt to the speed and demands of machines, rather than machines being designed to help those who work."

Parents, teachers and students should also be wary of relying on AI, the document says, and should understand its limits.

"The widespread use of AI in education could increase students' dependence on technology, impairing their ability to perform some tasks autonomously and exacerbating their dependence on screens," the paper states.

And while AI can provide information, according to the paper, it does not actually educate, which requires thinking, reasoning and discernment.

AI and disinformation

Users should also be aware of the "serious risk of AI generating manipulated content and false information, which can easily mislead people because of its resemblance to the truth." This misinformation can occur unintentionally, as in the case of AI "hallucination," where a generative AI system outputs results that appear real but are not, as it is programmed to respond to all requests for information, regardless of whether it has access to it or not.

Of course, according to the paper, AI falsehood can also "be intentional: individuals or organizations intentionally generate and disseminate false content with the aim of misleading or causing harm, such as deepfake images, videos and audio" - a deepfake being a false representation of a person, edited or generated by an AI algorithm.

Military applications of AI technology are of particular concern, according to the paper, because of "the ease with which autonomous weapons make warfare more viable," the potential for AI to eliminate "human oversight" of weapons deployment, and the potential for autonomous weapons to become the subject of a new "destabilizing arms race, with catastrophic consequences for human rights."

The author: Cindy Wooden

OSV News
