NEW YORK — How can Catholic social teaching guide us in weighing the benefits of artificial intelligence against the dangers it poses to human dignity? That question animated a wide-ranging discussion among Catholic thinkers and technology experts at the New York Encounter on Saturday.
Citing Pope Leo XIV's call to use AI responsibly as well as the Church's historic defense of human dignity in the face of modern technology, Davide Bolchini, the panel's moderator and dean of the Luddy School of Informatics at Indiana University, opened the discussion before an audience of several hundred people gathered for the three-day cultural conference in New York City.
"The pope encouraged us to use AI responsibly, to use it in a way that helps us grow, not to let it work against us, but to let it work with us, not to substitute human intelligence, not to replace our judgment of what's right ... our ability of authentic wonder," Bolchini said.
With technology rapidly advancing, Bolchini asked, how can the Church stay ahead of these challenges?
Chuck Rossi, an engineer at Meta who develops AI-driven content moderation technology for the conglomerate, whose platforms include Facebook and Instagram, argued that in his work, advances in AI have been instrumental in safeguarding human beings from harm.
AI systems, he said, can examine 2.5 billion pieces of shared online content per hour, filtering harmful material including nudity and sexual activity, bullying and harassment, child endangerment, dangerous organizations, fake accounts, hateful conduct, restricted goods and services, spam, suicide and self-injury, violence and incitement, and violent and graphic content.
"That's my world," he said. "It's a very, very hard problem. If we miss 0.1% of 2. 5 billion, that's millions of things that we didn't want to be seeing. But we do an excellent job, and we have for years — we're one of the best at it," Rossi said.
Using AI also protects human content moderators from being exposed to disturbing material, as they were in the past, Rossi said.
"The good thing that we are giving back to humans is you never have to do this horrible work," he said.
Paul Scherz, professor of theology at the University of Notre Dame, acknowledged the benefits of AI, which he said included advances in medicine and efficiency for tasks like billing ("Nobody wants to do billing," he said).
But Scherz warned of the dangers of relying on technology to do what is intrinsically human.
"We are really starting to turn to AI as people more broadly for these relational aspects, which would be tragic because there is something in that human-to-human connection, the 'I/thou connection,' as Martin Buber called it, that is irreplaceable by a machine," Scherz said. He noted that AI has even moved into ministry, with the rise of Catholic apps relying on bots to offer catechesis.
Scherz also cautioned that substituting AI for human interaction and intelligence risks eroding our skills, whether in relationships or in professional life.
"My fear is as we use these chatbots more and more we will lose those person-to-person skills. We'll no longer be able to engage one another as well, or have the patience and virtue to deeply love and encounter one another," Scherz said.
In addition, relying on AI in our work, such as when a doctor consults AI to make a diagnosis, will result in our "de-skilling," he said.
"We know that people, when they're using automated systems, they tend to just become biased and complacent and just approve the automated system. They lose their skills," he said, adding that airline pilots who rely too much on autopilot are more prone to making errors.
Louis Kim, a former vice president of personal systems and AI at Hewlett-Packard who is now pursuing graduate studies in theology and health care, pointed out that it is not possible to know today which skills will be required in the future.
"My personal view is I often find that predictions of impacted technology are largely unconsciously based on what we know of the current paradigm and structure and technologies," Kim said.
"There are going to be skills needed to control AI that are going to be different," he said.
Kim also called for "humility" in discussions about AI's potential to affect human relationships.
"Let's ask ourselves about the quality of our current human relationships, whether it's in the workplace, in toxic cultures, sometimes at home — even at conferences, at your next break, as you go around talking to this person [or] that person, how many times that person is looking over your shoulder for the more important person to talk to?" he said.
Our moral formation, he said, will continue to shape the quality of our encounters with others.

