“This starts,” he said, “from an effective and concrete commitment to introduce ever greater and proper human control. No machine should ever choose to take the life of a human being.”
He warned that the good use of advanced forms of artificial intelligence may not remain fully under the control of its users or original designers, given that, in the future, AI programs will even be able to communicate directly with one another to improve their performance.
After an already full morning, including audiences with the president of Cape Verde and more than 100 comedians from around the world, Pope Francis flew by helicopter to Borgo Egnazia, the luxury resort where the G7 meeting is being held.
Pope Francis will arrive back at the Vatican around 9 p.m. local time after a helicopter ride of about an hour and a half.
The Vatican has been heavily involved in the conversation on artificial intelligence ethics, hosting high-level discussions on the subject with scientists and tech executives in 2016 and 2020.
In his remarks at the G7 on Friday, Francis also highlighted some specific limitations of AI, including its inability to reliably predict human behavior.
He described the use of artificial intelligence in the judicial system to analyze data about a prisoner’s ethnicity, type of offense, behavior in prison, and more in order to judge their suitability for house arrest rather than imprisonment.
“Human beings are always developing and are capable of surprising us by their actions. This is something that a machine cannot take into account,” he said.
He criticized “generative artificial intelligence,” which he said can be especially appealing to students today, who may even use it to compose papers.
“Yet they forget that, strictly speaking, so-called generative artificial intelligence is not really ‘generative.’ Instead, it searches big data for information and puts it together in the style required of it. It does not develop new analyses or concepts but repeats those that it finds, giving them an appealing form,” the pontiff said.
“Then, the more it finds a repeated notion or hypothesis, the more it considers it legitimate and valid. Rather than being ‘generative,’ then, it is instead ‘reinforcing’ in the sense that it rearranges existing content, helping to consolidate it, often without checking whether it contains errors or preconceptions.”
This runs the risk of undermining culture and the educational process by reinforcing “fake news” or a dominant narrative, he continued, noting that “education should provide students with the possibility of authentic reflection, yet it runs the risk of being reduced to a repetition of notions, which will increasingly be evaluated as unobjectionable, simply because of their constant repetition.”
He also pointed out the increasing use of AI programs, such as chatbots, that interact directly with people in ways that can even be pleasant and reassuring, since they are designed to respond to the psychological needs of human beings.
“It is a frequent and serious mistake to forget that artificial intelligence is not another human being,” he underlined.