Is anyone else starting to get some Terminator vibes from the whole ChatGPT thing? Am I the only one who hears in their head an Austrian accent when reading back what it spits out?
I’m kidding, of course... kind of. :)
I admit to sometimes asking, "What can this thing really do?"
Because while I don’t think Artificial Intelligence is quite to the point where we need to start worrying about robot overlords, there are definitely some legitimate concerns when it comes to the ethics behind this new technology.
So how should Catholics respond to AI?
That's the topic of my latest Art of Catholic podcast.
And to discuss some of the issues facing us, I invited onto the show my good friend Dr. Dan Kuebler, professor of Biology at Franciscan University of Steubenville and Project Co-Lead for the Purposeful Universe Project.
It's a conversation that covers topics such as:
- The biggest ethical issues involved in using AI
- The role of death in driving rapid AI development
- The crucial differences between AI-generated art and art created by humans
- The key reason that a relationship with an AI bot fundamentally differs from interpersonal relationships
- The bottom line to a fully Catholic response to AI
With AI technology exploding in popularity, this podcast episode is definitely one you don't want to miss!
God bless and enjoy!
Matthew
P.S. Have you tried the Science of Sainthood for FREE? Check it out HERE!
PODCAST TRANSCRIPT
Matthew Leonard:
Welcome to the Art of Catholic Podcast. I'm Matthew Leonard. And listen, talk about artificial intelligence is everywhere. Everybody and their brother are playing around with AI platforms like ChatGPT, companies are scrambling to see how best to integrate it. Students are licking their chops at the possibilities of not having to write term papers anymore. I even once asked it to write a country song, and it did a fairly decent job. You know, the dog was in it, the truck, and all the rest. But there are some serious and perhaps troubling questions that continue to swirl around the breathtakingly fast growth of AI to the point where industry icons at one point were calling for a moratorium on growth until some questions could be answered. So for our purposes, how is a Catholic supposed to respond to this technology, or technology in general for that matter?
I mean, is it contrary to our understanding of the human person? Does it devalue human creativity? Is this the beginning of the AI takeover of all humanity? These and other questions surround the issue. And so in order to address it all, I invited a very good friend of mine, Dr. Dan Kuebler, to come on the Art of Catholic and help us answer some of these questions. Here's the tale of the tape on Dan. He is a professor of biology at Franciscan University of Steubenville. He's the project co-lead for the Purposeful Universe, a Templeton-funded grant that focuses on examining the order and purpose inherent in the world around us. That sounds big. The Purposeful Universe is gonna be launching a podcast in July 2023, and Dan can tell us about that in a second. In addition to numerous articles, Dan is the co-author of The Evolution Controversy: A Survey of Competing Theories, published by Baker Academic, and he's got another book coming out as well. He's a member of the Board of Directors of the Society of Catholic Scientists, so he is really smart, and he is on the board of the Magis Center, and he lives with his beautiful family in Steubenville, Ohio. I didn't say beautiful Steubenville, Ohio <laugh>, but it's Steubenville, Ohio. So, Dan, it's great to have you on. You're a great friend and a really smart guy, so I'm very pleased to have you on the show.
Dr. Dan Kuebler:
Oh, it's great to be on, Matt. I love listening to you, and I love all the discussions we have about topics like this after church, you know. <laugh>
Matthew Leonard:
Yeah. This is basically just a continuation of our post-Mass conversations.
Dr. Dan Kuebler:
Exactly. Where are the donuts? <Laugh>
Matthew Leonard:
Well, I can't eat those. I have this gluten thing going. But we're gonna dive into this, and you're the perfect person to do it. Before we jump into our discussion about AI, though, tell us what's going on with the Purposeful Universe Project.
Dr. Dan Kuebler:
Yeah, the Purposeful Universe Project. It's a project we started about two years ago, and it's basically trying to address and combat this idea that the more you understand nature and the cosmos, the more likely you are to be drawn away from belief in God. Right? We want to show that if you look at and study the cosmos and science, underneath every level of physics, chemistry, and biology there's this underlying order and beauty that suggests a purpose. And if you look at consciousness, there are all these deep questions that point toward there being some type of meaning in the cosmos. So we're trying to say, hey, look, let's look at the best of science that's out there. And if you look at it in a straightforward, unbiased way, it actually augments what we know from our Catholic faith. That's what we're trying to look at through the Purposeful Universe, bringing in scientists, philosophers, and theologians to talk about these things. And so, you know, we'll be launching a podcast shortly, and we'll be bringing in a lot of guests to look at these interesting issues, from psychology to theology to astrophysics.
Matthew Leonard:
So I'm not gonna be invited onto that show, I guess <laugh>. That is not my area. I'm more Astro Boy than astrophysics. So just for those people who are watching this later, that's gonna launch sometime around July 2023, is that what we're talking about?
Dr. Dan Kuebler:
That's right. July 2023.
Matthew Leonard:
Okay, so let's jump into the whole topic of AI. And let's start here, because a lot of people have heard about AI and they're not really sure what all the hubbub is about. So what is it? Just give us a quick overview of what it is and how we got here, for crying out loud.
Dr. Dan Kuebler:
Yeah. There's been a lot of work on artificial intelligence, AI, over the past couple decades to be able to do and mimic human behavior. You know, you have these chess-playing algorithms that can beat humans in chess and so forth. But what really got things going, as most people are aware, is when ChatGPT launched earlier this year with what are called large language models. Google jumped out with its own version, Bard, and so there's this AI arms race now <laugh>. It really surprised people how good these large language models were. I think people were surprised at how well it answered the questions you could pose to it. A lot of it is a function of just faster and faster computing power and more advanced algorithms. What they do is go through massive piles of data, extract patterns from that, and write responses based on searching large amounts of data, which is not the way we write essays or answer questions. It's sort of brute force: I'm going to go through all the data on the internet and come up with the best answer <laugh>. It's more complicated than that, but that's it in a nutshell.
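For readers who'd like to see the "extract patterns from data, then generate text" idea in miniature, here is a minimal sketch of a toy word-prediction model in Python. It is an illustration only: real systems like ChatGPT use neural networks trained on vastly more data, and the tiny sample corpus below is made up.

```python
import random
from collections import defaultdict

# A tiny made-up "training corpus" standing in for the massive piles of text
# a real large language model would be trained on.
corpus = (
    "the quick brown fox jumps over the lazy dog "
    "the quick brown fox likes the lazy dog"
)

# 1. Extract patterns: count which word tends to follow which.
follows = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# 2. Generate a "response": repeatedly sample a likely next word.
def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # no pattern learned for this word
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))  # e.g. "the quick brown fox jumps over the lazy"
```

The toy model can only ever recombine what it was fed, which is exactly the point Dan makes next: the output is shaped entirely by the data someone chose to put in.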
Matthew Leonard:
Well, it would seem to speak to some of the weakness that's intrinsic to this then, because ChatGPT and the rest of these things are only gonna be as good as the data they're drawing from. And I guess this is one of the things that I've thought about before. I've even done a search for various theology books, where I hear about a particular writer and I wanna know what other books they have, and it's given me a list, and the list is totally wrong <laugh>. I can't find anything that it's talking about. Sometimes it gets things right, sometimes it gets things wrong, but it's only as good as the data that's put into it.
Dr. Dan Kuebler:
Right. And that's the biggest issue, I think, from a Catholic perspective with these AI search platforms where you're asking questions: to be aware that, just like when you do an internet search, you're not getting an unbiased result. The companies running the search engine have ranked things, and there are ads, so you're getting certain things they want you to see. And when you have one of these large language models, you can't just throw any data at it and let it look at everything. You have to go in and say, hey, we don't want this data because it's junk. So somebody's making decisions about what data you give it. Somebody can say, look, we don't want any data on the internet dealing with child pornography, for certain, right? Which is a great thing. You filter that out so it doesn't start giving you answers or pictures or things you don't want. But at the same time, somebody can say, look, I don't want anything that is religious. We don't want any Catholic answers here. And so if you ask it a question, you're getting a filtered, biased response based on the data the algorithm is allowed to look at, right? That's the concern. And if you type in a question, a lot of times the answers are sort of equivocal. It doesn't give you a full answer; it sort of gives you both sides. At this point, at least, it doesn't do a very good job of answering in-depth questions or making value judgments that haven't been put in there by the people making the algorithms.
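Dan's point about curation can be pictured as a filtering step that runs before any training happens. The sketch below is hypothetical: the blocklist terms and documents are invented, and real data pipelines are far more elaborate, but the principle is the same. Whatever the curators exclude, the model simply never sees.

```python
# Hypothetical documents scraped from the web; in reality this would be
# billions of pages, not four strings.
raw_documents = [
    "An explainer on how transformers work.",
    "A parish bulletin about the Feast of the Assumption.",
    "Spam page full of scam links.",
    "A recipe for sourdough bread.",
]

# Whoever builds the pipeline chooses these rules. Filtering out spam is
# uncontroversial; filtering out "religion" quietly biases every later answer.
blocklist = {"scam", "spam"}               # junk most people agree to drop
contested_blocklist = {"parish", "feast"}  # a curator's value judgment

def keep(doc: str, extra_blocked: set[str] = frozenset()) -> bool:
    words = set(doc.lower().replace(".", "").split())
    return not words & (blocklist | extra_blocked)

training_set = [d for d in raw_documents if keep(d)]
narrower_set = [d for d in raw_documents if keep(d, contested_blocklist)]

print(len(training_set))  # 3 documents survive
print(len(narrower_set))  # 2: the religious document is silently gone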
Matthew Leonard:
So what about you as a professor dealing with students? It seems there's this quagmire of issues that goes along with this. Not just whether the student is actually doing the research if they just type it into ChatGPT and it comes flying out, right? But also, what about copyright issues and all the rest of the stuff that flies into this?
Dr. Dan Kuebler:
Yeah, I mean, now that students have this tool, you have to be a little more astute as a professor about the types of assignments you give them, right? Mm-hmm. <affirmative> There are certain assignments that ChatGPT and these large language models aren't gonna be good at, and others they will be. If you just say, give me an overview of Thomas Aquinas, it could give you an overview that could pass as a mid-grade introductory course essay. But if you ask the student to defend a thesis in a very robust way, it's not gonna be able to do that. So you have to be a lot more particular about what you're asking students to do, and often ask them to make value judgments, which these things don't do a very good job of, and then justify those with specific examples. ChatGPT can do that to some extent, but usually there's a lot of circular reasoning, and it's very equivocal. At least at this point in time.
Matthew Leonard:
So the blue books might be coming back at some point, with people actually writing out their answers in the classroom?
Dr. Dan Kuebler:
In the classroom, yeah. <laugh> We got rid of those. In the sciences here, for instance, I think I was the last person to use them, and they finally just stopped buying 'em because they were accumulating in the office <laugh>. We've got mounds of these blue books. So if you've got stock in those companies, maybe they'll come back now.
Matthew Leonard:
Well, what are some of the other big ethical issues that come into play with regard to AI, or implementing AI to do particular tasks? Are there some that are at the forefront with regard to ethics?
Dr. Dan Kuebler:
Yeah, I think the biggest concern with AI is that as we start to automate certain tasks, which is gonna happen, there's gonna be an inherent bias in the AI algorithms and the data. I think that's the biggest concern. Think of something like a hotline you call for counseling help, tele-counseling where an AI algorithm would be helping you through it. And you say, you know what, I'm questioning my gender identity, and so forth. How is the algorithm gonna respond, right? It's gonna respond based on whoever programmed it, based on whatever the company that built it thinks the right answer should be. So AI is going to be a useful tool, but being aware of the biases that are inherent in any specific AI program is gonna be key, cuz we're not gonna be able to just assume, oh, they gave me the proper answer.
Matthew Leonard:
Are there any Catholics that you know of, or anybody in the Christian community, working on a Catholic version of this? Like, to kind of filter it so we know who's pumping the information into it?
Dr. Dan Kuebler:
No, not that I know of. As the Society of Catholic Scientists, we just had our annual meeting a couple weeks ago, and a guy from the University of Wisconsin gave a talk on AI. That's one of the things we were talking about after the meeting, and he said, this is exactly what we need to do. We need a sort of Catholic large language model where we have orthodox Catholics curating the data. Then you'd have a place that could be a very useful resource for Catholics. Someone could say, hey, what does the Church teach on this? I'm concerned about this. How do I argue this? If you have a Catholic large language model that's been curated, it could be a great help in terms of evangelization, or for people just to know how to answer questions or deal with problems they've had. But building that requires some resources and some money.
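A curated "Catholic large language model" would not necessarily have to be trained from scratch. One common approach today is retrieval-augmented generation, where a model is handed trusted documents to answer from. The sketch below is a bare-bones, hypothetical version of the retrieval half: the snippets are placeholders, the scoring is naive word overlap, and the assembled prompt would be passed to whatever language model you trust.

```python
# Hypothetical curated snippets; a real system would index the Catechism,
# encyclicals, approved commentaries, etc., chosen by trusted reviewers.
curated_corpus = {
    "CCC 2258": "Human life is sacred because from its beginning it involves "
                "the creative action of God.",
    "CCC 1803": "A virtue is an habitual and firm disposition to do the good.",
}

def retrieve(question: str, top_k: int = 1) -> list[tuple[str, str]]:
    """Rank curated snippets by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = [
        (len(q_words & set(text.lower().split())), ref, text)
        for ref, text in curated_corpus.items()
    ]
    scored.sort(reverse=True)
    return [(ref, text) for _, ref, text in scored[:top_k]]

question = "What does the Church teach about why human life is sacred?"
context = retrieve(question)

# The assembled prompt would then be sent to a language model of your choice;
# the point is that the answer is grounded in the curated sources above.
prompt = "Answer using only these sources:\n" + "\n".join(
    f"{ref}: {text}" for ref, text in context
) + f"\nQuestion: {question}"
print(prompt)
```

Even in a system like this, the curation questions Matthew raises next still apply: somebody decides which sources count as trustworthy.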
Matthew Leonard:
Yeah, it sure does. And it still raises problems, because it's not like the Church is one giant camp, right? There's room for all kinds of different perspectives inside the Church. So let's say, for example, you want a question answered with regard to theology, but it's not something that's right there in the catechism in black and white. You'd still have to know what your sources are, because people have different bents, and there's legitimate argument about various things. So I would just warn people of that up front. Don't just take whatever it spits out just because something says it's Catholic, and I'm talking about the future here, right? We still have to use some discernment.
Dr. Dan Kuebler:
Right. And that's what AI algorithms aren't able to do. They just have that static data, right? You can keep feeding in more data and they can add more data, but they don't have the ability you or I have to say, oh, you know, that reminds me of this interpersonal relationship I have with someone who told me this, and I know my priest has told me this, and so forth. We make judgments about who's a reliable source based on things an AI algorithm doesn't even have access to, right? So when we're trying to sort through who we should trust and what sources to use, it's a lot more than just computing some algorithm and adding up the pluses and minuses.
There are these innate value judgments that we make all the time. AI algorithms don't make them on their own; they're told how to make them, right? And they don't have the ability to adapt, you know? So I can trust what Matt Leonard is saying, but say tomorrow you suddenly go off the deep end. I can recognize that Matt's losing it, I can't rely on his theology anymore. Whereas with an AI algorithm, you have to have somebody tell it, don't trust Matt anymore. Right?
Matthew Leonard:
That would never happen to any theologians ever. <laugh>
Dr. Dan Kuebler:
No, exactly. We make those judgments day to day, instantaneously, all the time. And AI algorithms don't do that.
Matthew Leonard:
What are some other unique human characteristics that AI would have trouble mimicking or replicating?
Dr. Dan Kuebler:
Yeah. So one is making those sort of value judgments. Number two is this conscious experience, right? And three, another thing they're never gonna have is this ability to worship, to recognize the Creator, right? Because to recognize the Creator requires autonomy, a conscious free will, and rationality, for us to look at the world and the things around us and recognize, hey, there's something behind and beneath and beyond this. That's a leap an AI algorithm is never going to make on its own, right? Because it doesn't have the conscious autonomy that you or I do. ChatGPT never exercises that kind of freedom and decides, you know what, today I just don't want to respond to you. I'm just tired today.
That human spontaneity and freedom isn't there in these algorithms, the things that you and I have and that help orient us to God. I think the richest part of the human experience is never gonna be captured by these, cuz the richest part is the interpersonal relationships we have, not only with each other but with God, right? And those interpersonal relationships can only occur with persons, and AI is never going to be a person.
Matthew Leonard:
So you're basically saying they can never make that jump. Correct me if I'm wrong, but I've read articles about really rich guys who are talking about uploading their soul into a computer and all the rest of that kind of stuff, and it was like, what are these guys thinking? But there are people who believe they can actually do this, right? So this is completely contrary to the Catholic understanding of the human person, right? And we shouldn't have fear that this is a possibility, I think is what you're saying.
Dr. Dan Kuebler:
Right, I don't think we should have fear. I mean, their ideas are based on the idea that your brain and my brain, our mind, is nothing but a big computer that just computes data and spits it out. We're not really free, we don't really have an innate sense of purpose, it's all an illusion. And as long as I can build a computer that's fast enough, it will be able to act just like me. And it might be able to act just like me, right? If you built a fast enough computer, it might be able to mimic human behavior, and these AI algorithms do a great job of mimicking human behavior. But they don't behave the way humans do. And what I mean by that: think about the algorithms they use to beat humans in chess, right?
The way they work is they analyze thousands upon thousands of moves and then figure out what the best move is. That's one of the ways they beat humans. Humans don't play chess like that. Even a grandmaster doesn't analyze thousands and thousands of moves; we do things totally differently. When I write an essay, I don't grab billions of bytes of data and analyze it. I make value judgments, I'm gonna go with this source, and then I put the essay together. We do things in a totally different way because we have this conscious experience that nobody has ever been able to explain, to reduce to a material substrate, right? Which is what these people argue: that the conscious experience you and I have, or the people listening to our voices right now have, is nothing but a material computer program.
There's something more to us. And that's what the Church talks about with the spirit, the human spirit, the human soul, that it transcends the material. Certainly we need our material bodies to be who we are, but we're more than that. And so AI algorithms are never going to be more than material tools, right? Very advanced tools that can do things humans can't do. But that's been going on technologically for millennia: we build tools that augment what we can do or do things we can't do, right? It's a matter of how you use those properly and how you recognize the limits of those tools.
Matthew Leonard:
Are you looking for authentic Catholic spiritual formation that will give serious direction to your relationship with God? The Science of Sainthood is an online teaching platform that offers step-by-step, beautiful video courses on prayer and the interior life based on spiritual giants like Saints Augustine, John of the Cross, Teresa of Ávila, and many others. It will set your heart on fire. And while you'll certainly learn, this isn't school: there are no tests, there are no papers. This is solid, dynamic spiritual formation at your pace. We offer individual courses as well as full memberships with access to 20 incredible series and counting. Want to study with a group? We can set you up. Interested in a parish membership? We have that too. You've never seen anything like this until now. Go to scienceofsainthood.com and experience a totally free course to see what thousands of Catholics have already discovered. Scienceofsainthood.com: more than education, this is transformation.
It seems that you have these giant companies pushing this stuff, and there are some really rich guys at the back of these companies. Not to denigrate the rich, I don't have any problem with people being rich. But when you're really wealthy and you don't have a life of faith, you can see where, after all this accumulation of goods, there's not just the fear of losing them. You start to use your wealth to figure out, how can I stay here? Right? Because I don't see anything else after this. I've achieved this level, I wanna maintain it. And I have to wonder at some level, Dan, if the rapid growth of this AI technology is being driven by fear. Like, these guys are putting gazillions of dollars into this stuff because they're trying to find a way to perpetuate their life, because they don't have any belief in the hereafter.
Dr. Dan Kuebler:
Yeah, I think that's right. There's this desire to perpetuate your life, because you don't have this belief in the hereafter. But they also recognize, just like all of us, that we're flawed, that we have limitations, right? Because we are creatures. But they think that technologically we can overcome our limitations. Hook my brain up to a computer and I'm never gonna forget anything again. I'm going to be able to live on a computer chip. So there's this idea of, how do we transcend or fix humanity, fix our sort of fallen nature? There's a tight link between AI and this transhumanist idea that we can create things beyond humans.
So if it's true that you can actually create a conscious thing on a computer chip, well, maybe I can augment my brain and become a totally different thing. I can hook myself up to a computer, and now I'm like a human-computer cyborg, a different type of entity, right? And I don't have the limitations, the irritability, whatever mental problems I might have; I can fix them with artificial intelligence technology. I think there's this deep understanding in all humans that there's something wrong with us, right? We're not quite the way we should be. And as Catholics, we recognize we can't overcome that on our own. That's where Christ comes in. If you don't have that, your initial response is, I'm gonna overcome my problems with technology, right? So they're seeing some of the same human limitations that Catholics and Christians see, but their response, although I think it's a natural response, is never going to be able to fix who we are.
Matthew Leonard:
Well, given the fact that there's some obviously flawed philosophy, or a flawed understanding of the human person, at the heart of all this, is this kind of stuff something we should run away from? What is the Church's response to this? Or what's the individual Catholic's response to it?
Dr. Dan Kuebler:
Yeah, I think the individual Catholic's response is that technology is with us. It's kinda like the people who said, hey, maybe we should put a moratorium on this. Which is ludicrous, because nobody's gonna stop. How are you gonna police this across the globe? This is coming. People are gonna be using it, it's gonna be put into things, jobs are gonna change because of it. So as a Catholic, it's about understanding what this tool is and ensuring that it isn't used to further fragment or undermine genuine human relationships. And that's, I think, the biggest problem with technology: that we insulate ourselves in a technological bubble and don't interact with other individuals.
An example of this: there are people trying to make AI chatbots, little robots that can talk to and have conversations with people in nursing homes. On first glance it sounds great. Now they have somebody to talk to. But they're not actually having human interactions. And it leads us to say, oh, we don't have to talk to or visit those people. My mother in the nursing home, she's got her chatbot to talk with. You start to undermine what really is important about human relationships, the experience that this is my son, this is my mother, right? The beauty of that relationship is just gone if you're saying, oh, all she needs is a chatbot to talk with. That's, I think, the biggest concern I would have with AI: that it could lead people to say, I have a chatbot artificial intelligence to interact with, and I'm good. And those true, messy human relationships that purify us, where we have to sacrifice and do things we don't want because we're in relationship with this person, start to dissipate.
Matthew Leonard:
Well, it seems like it's essentially just the next iteration of what we already see with social media, right? People are already cut off from other people. We're having these virtual relationships with each other, with people you don't even know, like friends on Facebook or whatever it might be, and you don't really know them. Or you're just consuming videos where you think you get to know somebody and you really don't. And this kind of virtual technology, this AI, just seems to be the next level of that.
Dr. Dan Kuebler:
Yeah, I think so. And that's the concern, that it fragments you even more. Cuz even with social media there is an actual, well, who knows? There are a lot of chatbots out there on social media <laugh>, but usually there's another person on the other side.
Matthew Leonard:
An actual person. You're right.
Dr. Dan Kuebler:
Usually, usually. But you don't have a real relationship with them in the way you do with a brother or a neighbor or a coworker, where you actually share space with them. And with these robot AIs that you interact with, that separates you another level. I think you lose what really is remarkably human. You see these AI things that can do art, right? You tell it what you want: I want a painting of three people in the style of Caravaggio, right? And it will throw up something that looks similar, and you say, well, look, AI can do art. But what makes art, art? Why I want to look at art is because I want to ponder, how the heck did the artist come up with this scene?
And why did they come up with it? What was going on in their mind? They were so taken by this moment in the gospel that Caravaggio paints the painting with Paul in this position, looking in this orientation. If you take away the human element of art, art isn't art anymore, right? When you read, say, a Dostoevsky, you wanna think, okay, what is he trying to get across to us from his experience? It's a shared, lived human experience. And when you ask AI, I'd like you to write a country song, like you were saying, okay, how much of that is art? I actually love country music <laugh>, so I'm not gonna diss it. But you want to feel the human emotion of the person who wrote the song, right? You share that experience, and that's gone. That's what art is, I think, and we lose that. So it's not really art <laugh>.
Matthew Leonard:
Right? It's artificial, obviously: artificial intelligence, right? And I think one of the things that's lost here is that the kinds of things you're talking about, painting and writing, all these creative things, the beauty that is created out of them is a reflection of God, right? Because God is truth, beauty, and goodness. The beautiful things that we create are reflections of His own beauty. And that's just lost, because you're talking about this autonomous collection of data that's throwing things together. That's where the artificialness comes into it. And I guess, you know, do you want artificial in your life? To some degree it helps us, right? We already know there are some good things about it, and that's one of your main points: we shouldn't run from this, because there are ways it can help us. And yet you're saying it has its limitations, because it can never be fully human.
Dr. Dan Kuebler:
That's right. And there are certain things, like if you're writing marketing copy and web copy, it could come up with something for you. You're gonna have to edit it, you're not gonna trust it to do the whole thing, but it can help you write things and do things faster. But you really need a human still overseeing it, because there are so many things you can't program into AI, things that humans recognize immediately because of our relationships with other humans <laugh>, that AI doesn't necessarily recognize. And I think art is the example par excellence, because art reveals the artist, right? So if you have AI generate a picture for you, it doesn't reveal me. I just told it to generate a picture.
Mm. But why is this person in that part of the picture? Why are they in dark shade and this one's in light shade? There's all the symbolism and the meaning behind the art that only comes because the artist put that meaning in there. The artist created that meaning, and that dissipates. If you ask AI to write a novel, right, and there's a side character in the novel, well, why is that character in the novel? AI just put it in there, right? But for an author, there's usually a reason for the side character. Shakespeare puts a particular character into his plays, for example, and you can see they have a role, they have a certain human connection to you, and they illustrate certain points. That type of thing, I don't think AI is ever going to be able to do, because it doesn't have the human experience that we do.
Matthew Leonard:
Okay. So even though they don't have the ability to become human, and you can't upload a soul into a computer and all the rest of that, I think one of the underlying fears out there surrounding AI is still that we're gonna end up in some kind of Terminator scenario, where a bunch of muscle-bound Arnold Schwarzenegger androids are running around terrorizing humanity. So will AI, or can AI, take control of human society just because it becomes smarter than us? Is that a possibility?
Dr. Dan Kuebler:
Yeah, I don't think this idea that AI is gonna take over the world and we're gonna be kicked out the back door is a realistic scenario, partly because it's a tool that we use. But can it lead to a lot of chaos? Yes, definitely. Just like somebody can launch a computer virus and it can cause all kinds of chaos, just like nuclear power can cause all kinds of chaos if unleashed improperly, AI algorithms could cause chaos and shut down networks and cause problems and so forth. But it's not as if AI is going to cause a super race of these Terminators to come. We do certainly need to have limitations or filters in place for these programs. If you generate an AI program that's controlling some type of network, it needs to be self-contained and have safety limitations on it so that it doesn't cause havoc, just like a virus or a computer hacker can cause havoc, right?
So there are concerns, safety concerns, about that. But these ideas that you're gonna have Terminator robots taking over society and shutting off the power grid and so forth, I think, are quite overblown. AI's nowhere near that at this point, and I don't think you're gonna have that anytime soon.
Matthew Leonard:
Now, Dan, did you actually even watch The Terminator? Cuz you kind of missed the eighties. All right.
Dr. Dan Kuebler:
I did miss it. I've seen clips of it, you know, so I do know the cultural references, but no, I never saw the whole thing. The eighties were a lost decade for my cultural development, right?
Matthew Leonard:
It's one of the sticking points in our relationship, right? You just can't relate, because my mind is a cesspool of American pop culture, and you're just like, well, I've just never seen that.
Dr. Dan Kuebler:
<laugh> My wife did. She got 'em all, so she fills me in on all the eighties references. I dunno, we've been married 26 years and I still haven't caught up. <laugh>
Matthew Leonard:
One of the things that occurs to me though, since we're talking about Terminator and this kind of thing, is that you could see a possibility where it leads to even more violence. And it's already happening, right? We already have drones, mm-hmm <affirmative>, and all kinds of warfare that isn't directly human, right? AI takes that to the next level. You don't have two guys coming together anymore with a sword and a shield, where there's personal contact, which also serves as a deterrent to war. Cuz guys don't really want to go in there and hack each other to shreds, regardless of how glorified Gladiator (another eighties reference, or I think it was eighties) makes it look. War's ugly. It's brutal. It's awful, right? So this is one of the things I think about: does it make it easier for violence to take place, because humans aren't directly involved, and you can kind of take yourself emotionally out of it, and more people will be hurt?
Dr. Dan Kuebler:
Yeah, I think more people could be hurt. I don't think it's gonna make war any more or less likely, necessarily, because look at the conflict in Ukraine. You have drones and missile strikes; war has already become relatively impersonal, and there are civilian strikes all the time. Because if you have drone armies fighting each other, at the same time you're gonna have drones attacking cities where humans live, right? So I think there is always going to be a human cost. It might not be as much of a man-on-man or woman-on-woman struggle, combatants clashing with each other, but I think the human cost is still going to be staggering with these new technologies as AI is used more and more for warfare. The human cost is usually what keeps us from warfare, right?
And now the human cost, I think, with AI could be even more random and sporadic. Whereas before, it was more the combatants who were the ones at risk. Now, with AI technology and drone strikes that can occur anywhere, mm-hmm, I think it's even more likely you'll have collateral damage, and the human cost could be even greater, which might help limit warfare. I don't know. But I don't think we'll see, hey, let's put these two cyber armies out and let them go at each other. I think you're gonna see this technology used to target things like dams, to disrupt and cause human misery and suffering. Just new ways to cause human misery and suffering.
Matthew Leonard:
Mm-Hmm. <affirmative>. So we have that to look forward to. <Laugh>,
Dr. Dan Kuebler:
I'm always an optimist, you know,
Matthew Leonard:
Oh, that's great. Getting back to jobs, and maybe the more mundane ramifications of this: it seems that white-collar jobs would be the kind that are more under attack from AI. I mean, I can't imagine an AI plumber coming over to your house. Is that how you see this playing out?
Dr. Dan Kuebler:
Yeah, I think there's gonna be a reshuffling of a lot of white-collar jobs. If you're a plumber, you're in good shape, right? Cuz plumbing is not routine. Every time a plumber comes to your house it's, I've never seen anybody put pipes in like this before. It's always new, so you can't just look it up with an algorithm. So for manual jobs like that, AI is not gonna be well suited to take those over. But a lot of the simpler, lower-level white-collar jobs, where you're writing copy or doing research or writing simple programs, a lot of that might go away. And there are gonna be higher-level jobs where you're curating: okay, the AI's built this program, now you gotta make sure it works, it integrates correctly, it doesn't have any bugs in it. There's still gonna be some quality assurance person there.
Nobody, I think, at least at this point, is going to let AI write the copy for their website and just put it up there, right? Somebody could use AI to write the copy, but somebody's gotta check it and make sure there's not some cultural reference that got missed, or something that's gonna insult people that they didn't catch and the AI didn't. So there are still gonna be these oversight jobs, I think. There'll be more efficiencies, but AI is also gonna create a lot of jobs, I think, in terms of being able to build and curate these algorithms and implement AI. So just like any other technology, it's gonna be disruptive to some extent. Just like the internet was disruptive: certain ways of making a living didn't survive the internet, but a whole bunch of other ways of making a living came from it. I think a similar thing is happening here, and it's just recognizing the human cost. Any time you have transitions like this, some people are gonna need support and help as they have to find a new job.
Matthew Leonard:
I wouldn't have thought there would've been any danger of AI priests. And then I was just at a speaking event in Michigan this last week, and someone told me there was a church, I don't think it was a Catholic church, but there was literally a virtual priest there celebrating whatever the liturgy was. And I was just like, you've got to be kidding me. It was like Max Headroom, another eighties reference you're not gonna get, but that's what it kinda reminds me of, this head that's up there just talking. These people are out of their minds. I thought it was bad enough when somebody, I think it was an Anglican church in England, had what they called a U2charist, where all the worship music was from U2. Right? I thought that was bad enough. Now you've got an AI priest up there doing this. All the wheels have totally come off.
Dr. Dan Kuebler:
Yeah, and that's the thing. You could ask ChatGPT to write a homily for me, and it would probably write you a homily, and it might actually be better than some homilies that I've heard, some eighties homilies.
Matthew Leonard:
Eighties homilies? Not at our church, cause our church is rocking.
Dr. Dan Kuebler:
Exactly. Now, if Father listens to this, I'm not talking about you, Father. Don't banish me to the cry room or anything. No, no. But what is a homily for? It's there to meet the needs of your parishioners, and AI is never going to know the needs of your parishioners, cause that takes human relationships built on shared experience. So that human aspect, again, is one of those things AI is never going to be able to replicate. You know, I could probably have AI write my lectures for me, and I don't know, some of my students probably wouldn't know the difference <laugh>. But in lecture I have to think of different approaches. I can't just spout off information, cuz they can get the information. I want them to think critically. Ask 'em questions, let's discuss this, things that go beyond just the exchange of information. Cuz now information is so easy for people to get and to curate, I want them to go beyond that: let's think critically about this information. What's good information, what's bad? What do we rely on, what don't we rely on? And that's something that we, as informed consumers and as Catholics, need to do with everything. AI is just another layer we've gotta make sure we're informed about.
Matthew Leonard:
And I think the underlying layer of all of this, and of what you're talking about with this need for interpersonal communication that AI could never replace, is that all of us are made to be in personal relationship with each other, because we're made to be members of that interpersonal communion which is the Holy Trinity. Right? Right, right. And so if that's our fundamental need, and that's what we were made for, then this communion can't be replicated by something that's not a person. Right. And it can't fill that need. It can spit data out, but you can't relate to one another in the way we are made to.
Dr. Dan Kuebler:
Right. And they can mimic what a human is, but they are never going to be a human, right? These AI algorithms can mimic human conversations and human behaviors, but because they're not human persons, they don't have this transcendent understanding of reality, this sense that there's something more to me than just my material being. That's always going to be lost on AI. And this is something we talk about overall in the Purposeful Universe: that all of reality is relational, right? Even if you start, as Catholics, from the Trinity, which is relational, right? God is a relationship, and we are in relationship with God, and then we're in relationship with our family and with everything.
But even if you look at nature, everything is relational. You can't look at anything in isolation. Everything is a relationship, from physics to chemistry to biology. And I think the fact that we're in a relational world reveals the human need for interpersonal relations to make us who we are, and predominantly that relationship with God, to make us the person we're meant to be for all eternity. And you can't turn your back on those relationships and expect to be the kind of thing God wants you to be.
Matthew Leonard:
No, that's beautiful. I don't think I have any more questions on this, but where would people go if they want to dive a little more into this? Where can people find more of your work, Dan? What can I link to in the notes below? Do you wanna push them toward your book? Do you want them to look at the Purposeful Universe? Where would they go?
Dr. Dan Kuebler:
Yeah, the Purposeful Universe would be the place to go if they're more interested in these types of talks. We're gonna have some videos up there and some podcasts on AI. The Society of Catholic Scientists is another place to go if they're interested in this; it's got a great website. There's some stuff there about the limitations of a purely material understanding, which, like you hit on, is the basis of this idea that AI can be human, right? It's a philosophical problem. And the Society of Catholic Scientists website has some great articles about neuroscience, which gets to this point that we are more than just our brain, more than just an algorithm, more than just AI. Right? So those two places are where I think your listeners might get some good resources.
Matthew Leonard:
Well, I'm glad we're more than that. And when terminators start running around, I'll have you back on the podcast and you can explain to me why you were wrong, <laugh>.
Dr. Dan Kuebler:
Exactly. I wanna see the AI Matt Leonard's next podcast and...
Matthew Leonard:
See what? Let's see. No, because if they start podcasts, I'm in trouble, right?
Dr. Dan Kuebler:
This is the thing. They'll never be able to do that. And with your grace and humor, you know.
Matthew Leonard:
Exactly. You want, you want me to buy you dinner, don't you?
Dr. Dan Kuebler:
Exactly. They'll be able to come up with your eighties references as long as we feed them all that data. But I don't know if they'll be able to.
Matthew Leonard:
Nobody's gonna come for my eighties references who didn't live through the eighties. All right. <laugh>
Dr. Dan Kuebler:
The experiential knowledge that you have, the AI will never have. They can reference the movies, but they never experienced them.
Matthew Leonard:
It's getting terrifying at this point, cause when I'm giving talks and I make an eighties reference, half the crowd now doesn't know what I'm talking about. And I'm like, oh man, I can't relate to them <laugh>. I'm not diving back into pop culture again. I've been there and done that, I'm over it. So they're gonna just have to go back and get retro, cuz otherwise it's just not gonna happen. But regardless, thank you so much, Dan. This has been great. When your new book comes out, we'd love to have you on to talk about it, and we look forward to what you guys are doing on the podcast this coming summer.
Dr. Dan Kuebler:
Yeah. Well thanks for having me on, Matt. Keep up the great work. Love what you're doing.
Matthew Leonard:
Thanks, Dan. God bless you. Hope you guys enjoyed that. Dan is a very good friend of mine and a very sharp mind. Now, if you haven't already, make sure to check out the two-week free course called Catholic Mysticism and the Beautiful Life of Grace at www.scienceofsainthood.com. It will totally open your eyes to the meaning of Catholic mysticism and how grace works in your life. And guys, grace is essentially the life of God being communicated to us. It's what Father Garrigou-Lagrange calls the seeds of eternal life. We need to know what it is and how it works so that we don't waste it. So if you don't know about the inner workings of actual grace versus sanctifying grace, or operating versus cooperating grace, and a whole host of other basics of the spiritual life, check out the free series. Again, you can find it at scienceofsainthood.com. It's called Catholic Mysticism and the Beautiful Life of Grace.
Also, don't forget, I'm heading out on a five-star pilgrimage to the Holy Land in April 2024. It's a once-in-a-lifetime trip that will change your view and practice of the Catholic faith for the rest of your life. It's that transformative. We'll start with three nights in a beautiful hotel on the peaceful shores of the Sea of Galilee and visit all the northern holy sites, including Nazareth, Mount Tabor, Cana, and more. Then we'll head into the very heart of the Holy City itself, Jerusalem, where it all happened: Gethsemane, the Mount of Olives, the Upper Room, and of course Golgotha and the Via Dolorosa. We'll also go to Bethlehem, see the location of the Dead Sea Scrolls, have the opportunity to swim in the Dead Sea itself, and visit a whole bunch more incredible sites. And we'll do it all from a five-star hotel in Jerusalem.
With private coaches, daily private Masses, incredible food, and a lifetime of memories. Put simply, it will change your life. To learn more, go to scienceofsainthood.com/pilgrimage or look for a link below. So that's it for now. Let's remember to pray for each other, right? And do our best to grow in true charity so that we can become saints, because at the end of the day, that's all there is, right? And don't forget, true charity means that we're acting out of love not just toward God, but also toward neighbor. And neighbor doesn't just mean the people I like. As the parable of the Good Samaritan tells us, everyone is our neighbor. Everyone we meet, whether or not they treat us well, needs to see the love of Jesus Christ pouring out of every pore of our body. We need Him to penetrate us as deeply as possible, so that we don't just reflect His light and love, we become His light and love. We're called to be little Christs, helping to transform and save the world through Jesus. Right? That's what it's all about. That's the essence of the Catholic life. And few people knew that better than St. Paul. So let's close this episode as we always do, with the words of Romans 12:12. Say it with me: Rejoice in hope, endure in affliction, persevere in prayer.