
Join the Denison Forum Podcast with host Dr. Mark Turman as he discusses the complexities of artificial intelligence with Dr. Katie Frugé, director of the Center for Cultural Engagement with Texas Baptists. This episode examines AI’s potential as both a tool and a risk in our daily lives, the ethical concerns it raises, and its possible benefits in ministry and Bible translation. Explore how AI has evolved, its current applications, and how Christians can thoughtfully integrate and regulate this technology in society.
Topics
- (00:17): Exploring Artificial Intelligence: Tool or terrorist?
- (00:48): Meet Dr. Katie Frugé: AI enthusiast
- (02:23): The evolution of AI: From hypothetical to everyday use
- (06:41): AI in our daily lives: Integration and impact
- (19:41): Generative AI: Creating new content
- (22:08): Ethical concerns and human bias in AI
- (27:58): Human intelligence vs. AI
- (30:20): The Tower of Babel analogy
- (33:32): Regulating AI development
- (37:14): Global AI race
- (39:10): AI in ministry
- (50:46): Learning about AI
- (55:47): The impact of AI on society
- (59:17): Concluding thoughts
Resources
- The Artifice of Intelligence: Divine and Human Relationships in a Robotic Age
- AI and Faith Website
- Hard Fork Podcast
- The Center for Cultural Engagement | Texas Baptists
- The Washington Post Tech Brief
- The peril of AI and the path to transcendent hope
- Winners of the Nobel Prize in physics warn of AI dangers
- “I have the power to manipulate, monitor, and destroy anything I want”: The threat of autonomous AI and the unique response of biblical faith
- Is DeepSeek AI a “Sputnik Moment?”
- Google AI makes breakthrough in biology
About Dr. Katie Frugé
Katie Frugé, Ph.D., earned her Master of Divinity degree and Ph.D. in systematic theology from Southwestern Baptist Theological Seminary. Frugé began her service with the Baptist General Convention of Texas in 2019 as the hunger and human care specialist with the Christian Life Commission (CLC), later became associate director of the CLC, and has since been named director of Texas Baptists’ Center for Cultural Engagement and the CLC.
About Dr. Mark Turman
Dr. Mark Turman is the Executive Director of Denison Forum and Vice President of Denison Ministries. Among his many duties, Turman is most notably the host of The Denison Forum Podcast. He is also the chief strategist for DF Pastors, which equips pastors and church leaders to understand and transform today’s culture.
About Denison Forum
Denison Forum exists to thoughtfully engage the issues of the day from a biblical perspective through The Daily Article email newsletter and podcast, The Denison Forum Podcast, as well as many books and additional resources.
EPISODE TRANSCRIPT
NOTE: This transcript was AI-generated and has not been fully edited.
[00:00:00] Dr. Mark Turman: This is the Denison Forum Podcast. I’m Mark, your host for today’s conversation as we continue to try to equip you with clarity in a world that’s cloudy, confusing, sometimes corrupt, and in many ways very uncertain. And that’s probably gonna be the feel of the conversation today as we jump in and try to learn a little bit more about this wonderful thing you see in the news almost every day called artificial intelligence, or AI.
And I’ve kind of subtitled this “AI: tool or terrorist?” because I have all of those different kinds of impressions when I start to get into this topic of artificial intelligence. What does it mean? How does it work? And what is it for? Our conversation partner today is familiar to many of you.
Dr. Katie Frugé is the director of the Center for Cultural Engagement with Texas Baptists, which represents 5,000 churches across the state of Texas. And she and her department help to equip and to serve believers, and particularly Texas Baptist churches, in a myriad of ways. So welcome back, Katie. We’re glad to have you with us.
[00:01:14] Dr. Katie Frugé: Thank you for having me. I’m glad to be here.
[00:01:16] Dr. Mark Turman: So you are the resident expert in the state of Texas for Baptist churches on AI. That’s what we need to tell everybody, right?
[00:01:25] Dr. Katie Frugé: Yeah, yeah. Absolutely not. That’s gonna be my big caveat: I’m a curious listener to this. I really started getting AI-curious in about 2020, 2021, when then-executive director Dr. David Hardage asked me to do a presentation at a conference called Future Church. And I believe you were one of the speakers there as well. And the assignment was: tell me what’s coming that I need to know about, that I don’t know about. And so that’s really where the whole thing started. I was just kind of looking out there, seeing what’s out there, what are people talking about.
I started by looking at just online resources and different things. I talked about the metaverse in that particular presentation. RIP metaverse, that actually died a couple years later. But that’s really what put me on the journey of trying to figure out what’s out there and what Christians need to be aware of.
And it really started taking off, and I know we’ll go into this later, in 2022, when ChatGPT became available to the public. It just exploded.
[00:02:22] Dr. Mark Turman: Yeah. And so, over the last five years, how would you describe what your perspective has become, kind of from where it started to where it is now?
[00:02:32] Dr. Katie Frugé: In some ways, where it started was more of a hypothetical situation. There were a lot of unknowns about it. It felt very futuristic. It felt more like something we’d seen in a movie versus a reality. And what we’ve really seen over the past couple of years is this thing is coming and it’s real, and we’re starting to see it integrate into our lives, in some helpful ways, some confusing ways, some ways we don’t even know AI is touching our lives.
But we’re still using it every day. It’s a piece of what we’re doing in just our day-to-day functions now. And so now what I’m looking at is a reality where we’re probably utilizing AI daily without even being aware of it. And I think that’s gonna continue to grow, even professionally.
I use it regularly now as a tool. And I think you’ll see more people doing that, finding ways to utilize it to make their lives more effective and efficient, for better or for worse.
[00:03:24] Dr. Mark Turman: Yeah. And there’s just so many things to look at here, so many resources, so many different voices in this conversation.
And it just seems to be growing, more and more coming out of places like Silicon Valley and other places where there’s just a lot of attention, a lot of research, and a lot of development going on. So I was just thinking about how to have this conversation with you, Katie, and trying to do some of my own research.
And I went back to those middle school English class questions of who, what, when, where, why, and how, as maybe just a way to try to frame this for people who hear different things. They may be super excited about it. They may be learning, like you’ve been learning. Some people are just going, oh, what now?
Yeah. And thinking about it in those terms, but let’s just start with a little bit of the who. Is it even appropriate to say who invented AI? Where did it come from?
[00:04:21] Dr. Katie Frugé: Yeah, not really. I think the way that you could really think about AI is it’s more of a tapestry that’s been coming together for a long time.
It feels really new, but it’s not. If you wanna take a philosophical approach, we’ve had conversations about this for a long time, thinking about what does it mean to be human, what does it mean to think, how do we understand knowledge. And that goes back centuries. But really, a lot of people would point to the mathematician Alan Turing as kind of the beginning of this whole conversation about actual artificial intelligence.
There’s a thing called the Turing test, which asks whether a machine is capable of thinking. If you’ve seen the movie The Imitation Game, it’s a great movie, but it’s the true story of kind of how the beginning of computers started: can a machine think? And that was really the beginning of it. And that was in the 1940s and fifties.
And then, I believe it was in 1956, the formal idea of artificial intelligence officially launched. So this is not a new idea; it’s something that’s been around for over half a century now. But you saw it growing in very small baby steps along the way, where we started with a hypothetical, we learned that machines can learn, we saw the growth of computers and different things like that. What’s happening now, and the reason it feels kind of breakneck, just kind of hitting us all at the same time, is because we’ve really seen a jump in it, especially the last couple of years. Where we used to see incremental growth, we’re seeing growth at paces that are surprising even the researchers themselves.
[00:05:53] Dr. Mark Turman: Yeah, I was doing some work with this. I had a conversation, I mentioned before we started recording, with a friend of mine named Dr. Hamus Ott, who is an Iranian-born believer and has an active ministry even today that stretches from Texas to Iran. He came to the United States in the early eighties from Iran to study, went to the University of California, and he actually holds a PhD in artificial intelligence.
So I thought, he’s the expert, a good person to talk to. He’s not actively engaged in that kind of research now, but was back in the mid eighties; he was writing papers, presenting papers. So he gave me a little bit of a sense of the history. That, as you said, you kind of take this back to the late forties, 1950s, the idea of artificial intelligence,
which we commonly call today just the computer, in some ways. But the idea of something that we created as a machine being able to process and think and do calculations and other types of activities, there is a sense in which it’s been around a while. But then I learned from other writers like Thomas Friedman and others that when you get into the sixties, companies like Texas Instruments and others stumbled on or came upon the discovery of what’s called Moore’s Law:
the ability, basically, of a computer to process enormous amounts of information. Many of the technologies that we walk around with in our pockets, our cell phones, are driven by Moore’s Law. And then, like you said, in recent days there’s been just a massive kind of acceleration in supercomputing.
And that’s enabling some of these technologies and applications, and some of the things that now have brought it to the forefront, as well as the fact that it’s big business. I mean, there are huge companies, not just Google and others, making huge investments. Meta’s doing that, and others are doing that.
And so there’s a lot on the line from an economic and business standpoint. Absolutely. But I gotta tell you, it’s a little bit comforting to me to hear you and others say, hey, this didn’t just show up, you know, on the heels of COVID. It feels a little bit,
[00:08:08] Dr. Katie Frugé: It feels like it. Suddenly we just woke up and we all knew ChatGPT.
We didn’t know.
[00:08:13] Dr. Mark Turman: It feels like we woke up in the spring of 2020 and there was this thing called a pandemic. And, you know, we started seeing things that we’d never seen before, and then just about the time we thought we could take a breath, somebody said, oh, and there’s this thing called AI. Yeah.
Yes. Anyway, if you’ve ever dealt with a computer or a phone, or you’ve talked to your Google Home or your Alexa, you’re probably talking to some form of AI, or engaging with some form of AI. Absolutely. Or your
[00:08:41] Dr. Katie Frugé: Siri on your smartphone, anything like that.
[00:08:44] Dr. Mark Turman: Yeah. All of those things are powered by supercomputing under this big umbrella called AI. But you mentioned ChatGPT, and there are some others that are becoming more and more commonplace all the time. Yes. And we’ll get to guidelines, boundaries, parameters that you and others might recommend. But even here at Denison Ministries, we’re using a couple of these tools, like Gemini and that type of thing.
A lot of times we’ll turn those tools on to record a meeting where there might be five, six, seven people in that meeting. We might discuss various things for an hour. We get through and we hit a button, and that tool gives us a one-paragraph summary of what the meeting was about, gives us bullet points, and even assigns tasks as they were talked about in the meeting.
Isn’t that incredible? Wow. You know, our administrative assistants are like, wow, this is fabulous; I hope it doesn’t take away my job. Totally. Yes. But it really helps. So those kinds of tools, Katie, are built on what I have come to understand under this banner of “large language model.”
Yeah. Can you kind of give us an explanation of what that means?
[00:10:03] Dr. Katie Frugé: Yes. I’m gonna give you the kindergarten definition of this, because that’s about where I go. We’ll default to the other experts if they need more technical things. But basically, what these LLMs are doing is, it’s programming that’s intended to mimic human thought patterns.
It has looked at all of the words that are available on the internet, and it’s scrubbed all of them, and it’s looking for predictive text so that it can try to emulate or simulate what a human’s doing. And we do this all the time without even realizing it. When I give my workshops on this, the example I give all the time is when I say “Twinkle twinkle little star,”
immediately in your head you’ve already filled in “how I wonder what you are.” That’s basically what predictive text is doing: it’s looking at the most likely word to come after the one that you’ve just said. And that’s what these LLMs are doing. So you give it a prompt and it’s going to, for lack of a better word, “think.” It’s not thinking, but it’s looking at all those things and making mathematical calculations to figure out, what’s the most likely thing I need to put out next after this?
And so, essentially, it’s trying to mimic human-like thought patterns in a very interesting way.
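To make the predictive-text idea concrete, here is a minimal Python sketch of a toy next-word predictor. It only counts which word follows which in one nursery rhyme; real LLMs learn the same kind of next-word statistics with neural networks over vastly more text, but the core principle is the one Katie describes.

```python
# Toy "predictive text": count which word follows which, then predict
# the most frequent continuation. A kindergarten-level stand-in for
# what large language models do at enormous scale.
from collections import Counter, defaultdict

training_text = (
    "twinkle twinkle little star how i wonder what you are "
    "up above the world so high like a diamond in the sky"
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("little"))  # -> "star"
print(predict_next("wonder"))  # -> "what"
```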
[00:11:13] Dr. Mark Turman: Okay. And so now I’m really concerned that I have to learn acronyms and that type of thing, because you refer to it as an LLM, so
[00:11:22] Dr. Katie Frugé: Yeah, yeah. Large language models. Yes.
[00:11:25] Dr. Mark Turman: So yeah, in this wonderful world, like, I’ve listened to a few podcasts, read some articles, and all of a sudden you’re like, oh, there’s a whole language here.
Yes. The developers and people that are creating these technologies are using an entire language that takes a while for the learner to catch up to, right?
[00:11:44] Dr. Katie Frugé: Yeah. And some of it is just learning through intuition. A lot of times people just go on and start kind of talking back and forth with the chatbot or something like that.
So you go to ChatGPT, you can go to Gemini, and the better you are at communicating with it, the better it is at coming back at you. So it’s learning you in real time too. It’s a fascinating experiment just to go on and practice. And, I’ll also just own this, in my family I see a generational difference.
It’s more intuitive to my 13-year-old than it feels to me. She’s much better at prompting and getting the information that she wants out of it. Yeah, when you’re talking with these models, the more you can give them feedback, the better they are at learning what predictions to make to give you information back.
[00:12:25] Dr. Mark Turman: Okay. And this is a curiosity I have: why are they calling them models?
[00:12:32] Dr. Katie Frugé: Oh, I actually don’t know the answer to that. I would assume because, I don’t know. We could actually ask ChatGPT.
[00:12:40] Dr. Mark Turman: Yeah, we could. And again, that’s just part of this terminology. When you start swimming in this pool, you’ll find out very quickly that it’s pretty deep and it’s getting larger by the day.
[00:12:55] Dr. Katie Frugé: And I think it brings up an important piece, though. I think that we need to be careful as this becomes more of the zeitgeist of our day. Even the language we use is anthropomorphic, but these aren’t human things. And from a Christian perspective, it makes me think about what it means to be human.
I think we need to have answers and robust conversations about humanity itself. What does it mean to be a human? What gives humans value? What gives humanity its distinctiveness? Because we are increasingly seeing these programs that are using humanlike language, but they’re models, right? Like, they’re based off of what we are.
And I think that we’re gonna start seeing a blurring of the lines between what’s real and what’s not real. And I think it’s gonna cause some concern for a lot of people. I think people will be tempted. I think there will be times we’re gonna lose touch with reality. You’re already seeing tragic stories of people who are, quote unquote, falling in love with these models.
It’s not a real human, it’s not a real relationship, and somehow they know that in their head, but they’re still being deceived into it. Even the New York Times came out with a story, I think it was a week or two ago, of a woman who’s married to a real man but has a partner that she’s in love with on ChatGPT.
And I think we’re gonna see more stories like this. So I think the bigger issue in some of this, related to your question about models and things like that, is just the question of language itself. We need to be careful not to over-anthropomorphize, using human words to apply to a nonhuman thing.
Yeah. Because I think we’re gonna see that a lot.
[00:14:35] Dr. Mark Turman: Yeah, and that’s a good callout, that words matter, right? Words and definitions really matter in terms of our understanding and comprehension of just about everything. They matter a great deal, and it’s interesting you brought that up.
I was talking again to my friend yesterday, and he said, you know, you’re able to develop an emotional attachment to this thing. And I said, how in the world could that happen? He said, just think about your pet. If you have a dog or a cat, particularly a dog, you develop an emotional attachment
to your pet, right? And you think that your pet at some level thinks about you, quote unquote. Okay? And when you come home and your dog runs and greets you, and you give the dog commands and he responds, and that type of thing, that builds. You know, there’s thousands of people that would testify that their pets are like family members, and they treat them as such, right?
[00:15:36] Dr. Katie Frugé: They call ’em fur babies, right?
[00:15:37] Dr. Mark Turman: They call ’em fur babies, right? And we all know that it’s really tragic when your pet comes to the end of his life, those kinds of things that, you know, have been a part of your life. So he said, but imagine that with a chatbot, or even an actual robot of some kind being in your house, that actually knows you better than your pet does.
Knows you better, because it can acquire more knowledge and data points about you. Which kind of brings me, so anyway, let me bring this around this way. So, Katie, when you’re sitting there typing a text message on your phone and it autocorrects your text message or gives you a suggestion of what your next couple of words would be, that is an AI-generated tool, right?
That’s correct. That’s creating those autocorrections and those word suggestions. So one of the terms in here, especially around a large language model, is that it, quote unquote, scrubs the internet for data points, or for what are called data sets, out of which it does its work.
It does its guessing as to, you know, how do you finish “Twinkle twinkle little star.” Talk a little bit about what it means for these tools to, quote unquote, scrub information from the internet so that it gets smarter that way. What does that mean, to scrub the internet?
[00:17:07] Dr. Katie Frugé: Yeah, that’s one of the more controversial parts to this as well, because you have these programs that are digging into areas where people did not give consent, right?
Maybe they posted something online intended for a very specific audience, with no intention of allowing an algorithm to then go in and study it, and now it knows me and it’s targeting me, to manipulate me maybe, or to provide something that it thinks I might want. And so that’s what it’s really doing: it’s going onto all these available resources, sometimes going through paywalls.
Even yesterday I was able to get ChatGPT behind a paywall, just using different prompts, just to see if I could. Wow. And so there’s some security concerns with that as well, of, you know, these programs that are growing in their ability to, we can say, manipulate me. You could say, read me better.
You know what, pick your poison. But they’re looking at all the different information, and that’s how they’re, quote unquote, learning. And that’s where a lot of the concerns people have come in: the lack of transparency. How are they getting access to this?
Are people giving consent to this information? What information are they being trained on? We don’t know the answers to a lot of that. This is just a fun side anecdote, but the reason TikTok is so popular is because of the algorithms. They are uniquely good at tailoring and scrubbing and learning their users really fast, and being able to put customized content in front of you very quickly.
And their algorithm is kind of the secret sauce that makes it successful. And so that’s the big back and forth with, do we sell it? Are we gonna ban it? And all those things. The parent company didn’t wanna sell the algorithm that would go along with the app itself, TikTok. And so that’s what we’re talking about when you say scrub the internet and things like that.
It’s searching through certain things to be able to create a better product, but you could also say, to be able to manipulate you more effectively.
[00:19:03] Dr. Mark Turman: Hmm. Yeah. So, if you’ve ever had that experience, like my wife and I did a couple of days ago, where you ask your computer or your phone to tell you about a certain product or a certain place, and then all of a sudden you start getting emails about it.
[00:19:22] Dr. Katie Frugé: Oh yeah, we’ve all had that.
[00:19:23] Dr. Mark Turman: If you’ve ever had that experience, that’s because there was an algorithm that scrubbed the internet based off of your question or your search or your purchase,
and then started feeding you a lot more stuff in that same idea or category. That’s what this is doing, right?
[00:19:40] Dr. Katie Frugé: Yeah, absolutely.
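For the technically curious, here is a minimal sketch of the feedback loop the hosts just described, with made-up searches and content items. Production recommender systems are vastly more sophisticated, but the shape is the same: every search quietly updates a profile, and the profile then reranks what you see next.

```python
# Toy profiling loop: searches build an interest profile, and the
# profile decides what content gets surfaced first.
from collections import Counter

interests: Counter = Counter()

def record_search(query: str) -> None:
    """Every search nudges the profile toward the searched terms."""
    for term in query.lower().split():
        interests[term] += 1

def rank_content(items: list[str]) -> list[str]:
    """Surface first whatever overlaps most with the learned profile."""
    def score(item: str) -> int:
        return sum(interests[term] for term in item.lower().split())
    return sorted(items, key=score, reverse=True)

record_search("hiking boots")
record_search("best hiking trails near Austin")
print(rank_content(["hiking backpack sale", "celebrity news", "boots ad"]))
# -> hiking-related items float to the top: the "suddenly I'm getting
#    emails about it" effect.
```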
[00:19:41] Dr. Mark Turman: Katie, go a little bit further into the mysterious part of this, even further, which is: you’ve got things like ChatGPT and other large language models, but every now and then in these conversations you’ll hear this term generative AI.
And that automatically starts feeling like a step further and a step deeper into the activity of artificial intelligence. Give us another kindergarten definition of generative AI as being distinctive from, quote unquote, normal AI.
[00:20:14] Dr. Katie Frugé: Yeah, with generative AI, I think it’s even in the name itself: it’s generating, it’s able to create new content.
That’s your ChatGPT; maybe you’re gonna use Gemini, or, I love Claude, that’s the Anthropic version. It’s the chatbots, it’s content creation. It can create code. My daughter likes to use DALL·E to create AI art, things like that. It’s creating these new things, so it’s generating within a specific data set that it’s been trained within.
So it’s gonna be language, or it’s gonna be art, but it’s been trained on a certain thing, and within the lanes that its programmer created, it can continue to create new things over and over again. Versus the traditional AI, which is gonna be more like patterns: making predictions and automating decisions.
It could be as simple as a spam filter. It’s the algorithms on Netflix or Amazon that give you those recommendations. Like we said, when Amazon makes that recommendation to you, that’s more of the normal AI. It’s just kind of looking through something and saying, how can I predict what to put in front of you?
Versus the generative AI, which is thinking ahead, saying, if you like this, I’m gonna go and take you over to this next level now too. So it’s generating new things, versus the normal AI, which is more pattern automation.
[00:21:28] Dr. Mark Turman: Okay. So we might say that that’s like a higher level of thinking, when you get to generative AI.
[00:21:35] Dr. Katie Frugé: So I actually asked ChatGPT how it would define this, in preparation for this. And here’s how it defined itself. It said: generative AI creates something new; traditional AI analyzes or classifies existing data.
[00:21:50] Dr. Mark Turman: Hmm. Okay, great. Yeah, and
[00:21:52] Dr. Katie Frugé: then it go, it went a little bit further. This is so impressive.
It said generative AI is like an artist, traditional AI is like a judge. So there you go. Oh wow.
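A minimal sketch of that “judge versus artist” distinction in code, using two deliberately tiny toys: a rule-based spam check that only classifies existing messages, and a sampler that generates a new phrase from learned word pairs. The word lists are invented for illustration.

```python
# "Judge" vs. "artist": classifying existing data vs. creating new content.
import random

# "Judge": traditional AI classifies what already exists. A crude
# keyword spam check stands in for a learned classifier.
SPAM_WORDS = {"winner", "free", "prize", "urgent"}

def classify(message: str) -> str:
    hits = sum(word in SPAM_WORDS for word in message.lower().split())
    return "spam" if hits >= 2 else "not spam"

# "Artist": generative AI samples something new from learned patterns,
# here a handful of hand-coded word pairs.
PAIRS = {"twinkle": ["twinkle", "little"], "little": ["star"]}

def generate(start: str, max_words: int = 4) -> str:
    words = [start]
    while len(words) < max_words and words[-1] in PAIRS:
        words.append(random.choice(PAIRS[words[-1]]))
    return " ".join(words)

print(classify("URGENT winner claim your free prize"))  # -> spam
print(generate("twinkle"))  # e.g. "twinkle little star"
```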
[00:21:59] Dr. Mark Turman: Okay. Yeah. That very helpful metaphor for sure. Yeah. That’s really, really helpful and gives us a, a better way of framing this conversation. I want go back to what part of what you just said in a moment as a comment, which is based on its training, which leads to the question in my mind of who is training ai?
[00:22:18] Dr. Katie Frugé: I love that question
[00:22:19] Dr. Mark Turman: And tell me if I’m on the right track here. So there are software developers, there are, for lack of a better term, computer nerds somewhere in the world working for these companies. And they are creating what we generally term an algorithm that then tells this software,
like, if it’s a judge, it tells it where to go look for information and what kind of information it should be looking for. Yes. And if it was in the generative area, it would be an algorithm telling that computer program to go and create things in these categories or within these boundaries.
Am I thinking about that right, that the initial trainer is a human being creating a computer program, an algorithm, to direct these processes?
[00:23:12] Dr. Katie Frugé: Absolutely. And that’s one of the biggest challenges I think we face from an ethical perspective: there’s a human element to this that just cannot be overlooked, and it is imposing,
even unknowingly, biases, beliefs, value systems, and things like that. If you remember, I think it was last summer, Google got in a lot of trouble because of their Gemini. I don’t think it was Gemini at the time; maybe it was Bard. I don’t remember when they changed it. But it got in all sorts of trouble because it was creating all these wonky outputs,
where it made the Founding Fathers of America ethnically diverse or something like that. It wouldn’t give value judgments; it wouldn’t say that Hitler was worse than Elon Musk. And if you asked for a job description of a lobbyist for oil and gas, it lectured you on why oil and gas is bad, and stuff like that. Just bizarre things. And people were really bothered by that, because the argument was, look, this is exposing some very deep biases that were put into this program, and it’s exposing at a really ridiculous level
what I think happens subtly all the time, where the programmers’ maybe unknown value systems are getting put into these systems, and that’s gonna impact what happens when you say “Twinkle twinkle little star.” Another example that I think helps highlight this, keeping it at the kindergarten level:
I’m a mom and I make sandwiches all the time, and our family regularly makes different types of sandwiches with peanut butter as a part of it. So normally, when you hear the words peanut butter, you are gonna say peanut butter and jelly, right? Because that’s the most common; that’s the value system.
My family has a different value system. My kids don’t have peanut butter and jelly; they love peanut butter and honey, and that is their favorite thing. I know it’s kind of weird. But because of that value system, if you ask my 5-year-old what kind of sandwich it is, she would say peanut butter and honey, versus somebody else who would say jelly.
That’s a simple illustration of it, but that’s what I’m talking about, where your own values are going to impact what the training is gonna give for the next predictive word. You can even expand that out a little bit more. I grew up in a home with five other kids,
five brothers and sisters, and my mom made a dollar stretch really far. And so we didn’t have peanut butter and jelly or peanut butter and honey; we had peanut butter and banana. That’s what I had as a snack growing up. So there’s just multiple different value systems that are going to impact these predictive models.
And at the very least, I think there should be transparency about what the constitution is, what value systems are getting put in. And that’s one of the biggest challenges when we’re talking about regulations and things like that: we don’t have a unified agreement on what ethical AI even is.
What standards are we going to hold these companies to? What standards do we say we’re not gonna go past? If you asked an LLM that was developed in China versus one in Silicon Valley, they’re gonna give very different answers if you asked what happened in Tiananmen Square. And so those are the real-world implications, where you can almost start seeing people living within their own realities,
you know, based on what they’re consuming, based on the value system that’s getting embedded by a human into a computer program.
[00:26:29] Dr. Mark Turman: Right. And yeah, so much to think about, because there’s no way that a single human being doesn’t have those biases and those life experiences and that type of thing, and there’s no way that that doesn’t come into the work and the training that they’re giving to these algorithms.
And so that is a massive area, which kind of brings me around to this question I wanted to ask, which is: from your perspective, Katie, is AI dangerous, and in what sense is it dangerous? Are there other ethical concerns, other than the ones that you just laid out, that we need to be thinking about and praying about and aware of?
[00:27:07] Dr. Katie Frugé: It’s a great question. And the funny thing is, we were talking before we started recording, that question really depends on who you ask. There is a very funny term that’s kind of getting floated around in the tech world right now called p(doom), which is your probability that artificial intelligence is an existential threat.
To what level do you think that artificial intelligence is an existential threat to humanity? And so if somebody’s p(doom) is really high, it means, yes, I think we’re all doomed and the Terminator’s coming next year. Or if their p(doom) is low, it’s like, it’s not that big of a deal;
it just means, basically, everybody’s gonna have a personal assistant in the future. So there’s definitely a conversation to be had depending on who you ask and what their p(doom) is. I think everybody agrees, yes, this is a serious issue that we need to think carefully about.
For how many thousands of years have humans been the most intelligent creature walking on the earth? Mountain gorillas exist. They’re more powerful than us, but because we have higher intelligence, they live at the mercy of the humans surrounding them, in some ways.
[00:28:12] Dr. Mark Turman: Right.
[00:28:13] Dr. Katie Frugé: We are looking at the possibility of a world where we are no longer the most intelligent creature, thing, being, I’m not sure the right word for it, but we’re looking at a hypothetical situation where there will be a higher intelligence on the planet, and the implications of that.
Do we turn into the mountain gorillas? We don’t know. That would be a high p(doom). But those are the issues at the big level; that’s the meta concern: what do we do with that? I think that there are other concerns as well. Just going back to the value systems and things like that, I have a deep concern that we need to be very clear and hold fast to a robust understanding of the image of God and what it means to be human, especially in light of this conversation about artificial intelligence.
If humans are no longer the most intelligent being on the planet, does that impact what it means to be human? I don’t think it does, but I think we need to have a very clear and firm grasp on humanity, because we’re gonna see those lines starting to get really blurred. And so that, for me as a Christian, is one of my bigger concerns: just having a healthy anthropology, an understanding of what it means to be human itself, and how we are going to navigate these challenges.
When those waters start to get a little bit muddied, what if we have a robot who can think for himself? I think there was a movie with Robin Williams years ago where he was a robot that slowly, over a period of years, turned into a human or something like that. I’ll have to look that up.
But I mean, that’s not just a dystopian possibility anymore. So what do we do if you have a robot that has artificial, or even artificial general, intelligence that starts taking on human form and things like that? So those are, go ahead.
[00:29:59] Dr. Mark Turman: Yeah, it just reminds me of a movie from a number of years ago, I think called Eagle Eye, which I believe was in the same genre of computers becoming smarter than us.
And okay, I’m gonna just go full disclosure and tell you, Katie, I’m gonna throw you a curveball, okay?
[00:30:14] Dr. Katie Frugé: Oh, no. Okay.
[00:30:14] Dr. Mark Turman: And here’s my curveball. Just listen.
[00:30:16] Dr. Katie Frugé: You promised no curveballs. Yeah.
[00:30:17] Dr. Mark Turman: Just listening to you talk about gorillas, that type of thing, I just started thinking to myself, what Bible passage or story might in some way inform here?
And I’m just like, something about the Tower of Babel just starts jumping up in my brain at this point. Oh yes. So is there anything about the Tower of Babel that seems to apply here, in your mind?
[00:30:40] Dr. Katie Frugé: Let’s not get too prideful. You know, I would not limit God from intervening at some level.
I do worry that there will even be false religions that could start popping up if you’ve got a program that’s claiming to be God and have secret information. And I just think the Bible shows that God doesn’t take too kindly when people start claiming his territory.
And so I do think that there could be a spiritual element to it. You know, I’m not a prophet, and I don’t wanna pretend like I know the future. But there definitely does feel like there’s kind of a Babel-type situation, where, are we building our own towers,
trying to become God ourselves?
[00:31:26] Dr. Mark Turman: Yeah. Trying to make a name for ourselves, as it says about the people at the Tower of Babel. But, you know, in some of the things I’ve been reading and listening to recently, there’s a big conversation in the AI world about opportunity versus risk. As you said, even this terminology of a p(doom) scale almost feels flippant. When I heard a few people talking about it, that, oh yeah, this could be the end of the world or the end of humanity, but we know that’s just a factor that we have here,
it almost sounded flippant to me. But there’s a big conversation going on. There are a lot of people, or at least some people, I should say, in this conversation who say, this is what we were always destined to be able to do, and this is just human beings being at their best, being able to invent and create technologies that make life better.
And then there’s a whole other group of people who are saying, I don’t know about that. You know, we sometimes have this tendency, as we somewhat have learned with social media, of, if we can do it, we should, or we of course have to do it, without really thinking about important ethical questions and big considerations.
I think we’ve learned, and continue to learn, big lessons in the social media world about unintended consequences. And some people are raising those kinds of flags, saying, hey, we need to slow down, and we need to ask very deep and profound questions about, as you said, what it means to be a human being and how this could change or affect our lives in detrimental ways.
And so, obviously, as Christians, that’s one of the things we want to bring into this conversation: hey, we really need to ask the best questions we can, knowing that we can never know all of the implications of how these technologies are impacting our lives, but we at least ought to try,
and keep a framework in there. So that leads me to this next question, which is, and this is very much an evolving thing, but who is it that you see might be monitoring, slash possibly regulating, the development and proliferation of AI? I listened to a podcast a couple of days ago in which one of the guys that is the CEO of one of the major companies in this environment said, oh, we’ll tell you.
And I thought, you’re going to tell me when your invention is potentially going to be destructive to my life, even though you’re possibly making millions or billions of dollars? But I should trust you to tell me when it’s going to be bad?
[00:34:20] Dr. Katie Frugé: Yeah. And it’s definitely billions, I think, just to be clear. We are the profit for a lot of these people.
That’s one of the big concerns, to be honest, Mark. It’s not clear who’s regulating this. The US government has come in to try to do some of it. It took 15 years after social media became available to the public for the first congressional hearing, for them to call the CEOs in and ask them questions.
Within six months of ChatGPT being available to the public, they were having them come in and give testimony. So I do think that the US government is paying attention, which is good. We’re not necessarily regulating, and there’s some tension even within the groups right now over who is gonna set the lines and, you know, what the boundaries are gonna be.
The companies wanna self-regulate, by and large. You do have some whistleblowers who are saying these people are bending their own rules: they say that they have these rules, and then they go and bend them, and there’s no third-party agency or group that is kind of the watchdog that would call you out for something like that.
And so that’s one area where I do think that we’re still early enough in the game that we need to be involved: writing our legislators, writing our elected officials, saying we want better transparency and regulations on these things, that we see the opportunity and the potential that this thing has.
But we also see the dangers, and we think people need accountability. And so we’re asking, because this impacts all of our lives, for there to be some meaningful guardrails put on this.
[00:35:48] Dr. Mark Turman: Yeah, I think that’s exactly the way it needs to be. And there are already lawsuits. When ChatGPT came out,
there were newspapers like the New York Times and others that immediately started filing lawsuits relative to copyright infringement, that type of thing, because of what these large language models could do. That’s really just the tip of the iceberg of where this is going. And, not to go too far down this road, because there’s just so much that we could talk about, but, Katie, if you look at AI as a tool:
I’ve heard significant leaders, both in the local context of Dallas, where we live, and in other places, raise concerns particularly around medicine and AI, and warfare and AI. Do you think that those things are likely to accelerate to the point where you would see potential treaties between governments relative to how AI is handled? Like we have the International Criminal Court, we have international statements even that have to do with what might be defined as just war or ethical war.
If you want to get into some of those very deep conversations, do you think aspects of AI will roll up to be on a scale that’s global?
[00:37:14] Dr. Katie Frugé: I think there definitely is the potential to. What I’m really seeing right now is kind of our own version of a space race over who is going to control AI. That’s part of the big arguments over chips and manufacturing.
There’s a huge push: whatever country or nation is able to achieve what they call AGI, artificial general intelligence, probably will control the game, by and large. And so there’s a lot of concern over, I mean, it’s a very different world if China gets it before the US does, or if Russia gets it, or, you know, pick your country.
And so I think there’s a huge interest globally to all say, hey, can we all get on the same page here and work together? Especially for more vulnerable countries, smaller countries, there could be some real impact; even countries that are low-tech will be impacted by whoever gets to AGI first.
And so that’s a definite concern, and I think that’s why you see such a race right now, and a push, especially on the US front, to try to get there before other countries. And that’s why you see these countries and these big tech people trying to have a prominent role within even the current administration.
There’s actually a long game to that, and that’s why they’re there trying to advocate and do these things: because there’s a vested interest in having your country be the first one in this race. So it’s kind of a hard balance too, because they’re all saying, we need to slow down, we need to be careful,
we need to know what we’re doing. But also, we don’t wanna be the second one there, so we wanna be the first one there.
[00:38:46] Dr. Mark Turman: Yeah. We don’t wanna be second; we certainly don’t wanna be fifth, right? But, you know, we all know, or we should know from our history, I guess I would say, that there’s a very thin line at times between a space race and an arms race.
A very thin line. And so there are real concerns there. Katie, let me see if I can bring this back around to where maybe you and I are a little bit more comfortable, which is the potential benefit of AI related to ministry: local church ministry, how pastors do their work, how children’s ministers do their work, how a Sunday school teacher goes about their lesson for the coming Sunday.
What are some of the things you’re seeing that might be exciting for us as believers that serve and participate in church life, and that also try to be a blessing to the communities in which we live? Where do you see some real opportunity for us in this?
[00:39:43] Dr. Katie Frugé: I love that you asked that, because there are opportunities too.
I don’t wanna seem like the sky is falling here, and so going from the darker, heavier side to the lighter side is great. Even using your own example earlier, how Denison uses Gemini to be able to be more effective and efficient and speedy in some of the work: I think of the benefits for especially these mid- to small-sized churches that maybe only have a few paid staff.
You could really see AI being able to take on a tremendous level of that administrative work. Maybe you can’t afford to have a full-time admin, or you can’t afford to do all these things. You’re gonna see AI really able to do it, even right now, but even more so in the coming years, as you have this new development called agents, where you can assign a task and it’s like having a digital employee for, you know,
pennies compared to what you would pay a human to do. And they’ll be, more or less, very effective. You can give them assignments to even plan the community outreach event, and they can go ahead and make the reservations, call the places, do all the detail work, so that you can focus on those relational, gospel-building opportunities.
And so they’ll be able to help partner alongside the churches, especially churches that are resource-scarce or need some extra volunteer support or things like that. I think AI’s got some great opportunity there to enhance your ministry, so that you can focus more on those gospel relationships that you’re trying to cultivate and grow within your church community, and kind of let the AI take some of the other boring work.
None of us likes doing receipts and expense reports, right? So let’s let AI go and take that, so we can focus on some of the other things. And then also, you can train these LLMs, these large language models. I’ve met pastors who have their favorite theologians and their favorite resources and have built their own kind of sub-chatbot, and they use it for sermon preparation, for research.
And it’s all trained on the virtue system, or the value system, that that pastor has put into it, so he can trust it and know, you know, I’m getting information from, you know, Barth, pick your theologian. And so you can utilize that as a more effective research tool, to be able to do preparation for writing, anything like that.
I find it incredibly helpful for getting through writer’s block. I’ve used it that way a couple of times. If I’m in the process and I just can’t get through something, I’ll share with the large language model what I’m doing and have it offer suggestions. And I’ve found it to be very helpful for that.
Different language models have different, again, I don’t like using these words, but personalities, if you will. It’s not a personality, but it’s the flavor that the programmer gave it, we might say. And some of ’em, I think Claude from Anthropic, I just love the tone and the way it gets me better, I guess I could say.
And I’ve found it to be a helpful aid in my writing projects. Not to write for me, but to enhance what I’m doing.
[00:42:43] Dr. Mark Turman: Yeah. And that is a very nuanced conversation that is developing, especially on the creative side of anything, you know, whether it’s writing a story, writing a sermon, writing a song, writing a poem, creating a drawing.
There’s a big conversation here about the legitimacy, or illegitimacy, of AI tools. But, you know, there’s a lot of opportunity. Just the idea, as you were talking, of, wait a minute, I could train my own robot to help me. And I could train it on my values. I could train it on my theologians. You know, many of us love the work of John Stott.
You could say, hey, AI, tell me everything that John Stott thinks or wrote about this particular doctrine or topic.
[00:43:32] Dr. Katie Frugé: Exactly.
[00:43:33] Dr. Mark Turman: And I’m building my case for my sermon, that type of thing. And there’s ways that you can, quote unquote, control it by training your own sub-version of AI to help you in that way.
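As a rough illustration of how such a “sub-version” is often built, here is a minimal retrieval-plus-prompt sketch. The library excerpts and the call_llm function are hypothetical placeholders, not any specific product; the point is simply that the model gets instructed to answer only from sources the pastor has vetted.

```python
# Hypothetical library: short excerpts the pastor has vetted, keyed by
# source. In practice these would come from real books or study notes.
LIBRARY = {
    "commentary_on_romans": "excerpt from a trusted commentary on grace ...",
    "sermon_on_discipleship": "excerpt from a trusted sermon on following Christ ...",
}

def retrieve(question: str, top_n: int = 2) -> list[str]:
    """Rank passages by crude keyword overlap with the question."""
    terms = set(question.lower().split())
    return sorted(
        LIBRARY.values(),
        key=lambda text: len(terms & set(text.lower().split())),
        reverse=True,
    )[:top_n]

def build_prompt(question: str) -> str:
    """Pin vetted sources into the prompt and forbid answers outside them."""
    sources = "\n\n".join(retrieve(question))
    return (
        "Answer ONLY from the sources below. If they are silent on the "
        "question, say so rather than guessing.\n\n"
        f"SOURCES:\n{sources}\n\nQUESTION: {question}"
    )

# answer = call_llm(build_prompt("What is grace?"))  # call_llm: your chat API
print(build_prompt("What is grace?"))
```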
Which, let me bring you around to this. So I asked Gemini, the large language model. We asked it this question; we wanted to know what it would tell us. When you’re dealing with AI, with some of these models, you give it a prompt, or a question.
Here’s the prompt that we gave Gemini: What cautions, if any, would you have for a Christian minister wishing to use AI in their ministry, research, writing, and crafting responses to church members? It created a five-page bullet-point answer around four primary concerns. Let me just give them to you:
theological concerns, pastoral and relational concerns, ethical and practical concerns, and spiritual discernment and wisdom. And then it broke it down even further. It could be its own presentation, and it took around 30 seconds for it to produce all of that information, but it is enormously helpful, in the best sense.
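For readers who want to try the same exercise programmatically, here is a minimal sketch assuming the google-generativeai Python SDK and an API key of your own; model names and SDK details change over time, so treat it as illustrative rather than definitive.

```python
# Posing the episode's Gemini prompt through the google-generativeai SDK.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; use your own key
model = genai.GenerativeModel("gemini-1.5-flash")  # example model name

prompt = (
    "What cautions, if any, would you have for a Christian minister "
    "wishing to use AI in their ministry, research, writing, and "
    "crafting responses to church members?"
)
response = model.generate_content(prompt)
print(response.text)  # the multi-point answer described above
```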
Sure. But I’m a little bit concerned, and I know other people are concerned. We’ve seen, Katie, a lot of stories, particularly in the last five years, around plagiarism, around colleges and universities being very concerned that their students are just letting things like Gemini and other chatbots write their papers for their classes.
What sense of boundary would you want church leaders to be thinking about right now? I know I was with some people from Lifeway, the publishing arm for many of the churches that we deal with. They kind of have a lockdown policy when it comes to writing new material that they would publish.
When it comes to the use of AI, any thought that you have on kind of a boundary, particularly on the creative side or the ministry side, that Christians need to think about?
[00:45:52] Dr. Katie Frugé: Yeah, I have three kind of principles that I follow myself, and I try to hold them loosely, depending on the Lord for wisdom, knowing I may need to add more or less.
But this is more or less my guidance right now. Number one, I use AI to be a support, not to be synthetic. So that means everything originates with me; I’m not allowing AI to generate synthetic things that aren’t authentic to what the Lord has given me, which is a brain that can think, that can create things.
And as I pray that the Lord give me wisdom as a content creator, as I’m putting out information to people, I wanna make sure that I’m utilizing AI as a support. I’ve heard people talk about it as a research assistant or something like that. So you can tell it,
go get this information, help me do this. But I’m not going to let it write for me and synthetically present something that’s not authentic to me. That’s bearing false witness, at the end of the day, right? It’s presenting something that’s not mine that I’m claiming to be authentic to me.
So it’s gonna be a support, not a synthetic use, for me. And then, similar, but just building on that: I think we should use it as a resource, not so much as our primary research arm. AI can hallucinate, and what that is, basically, is where it doesn’t know the answer, so it just makes it up, and we have no idea why it does that.
I catch AI hallucinating all the time. And so I think, at least for right now, AI shouldn’t be your only research; you’ve gotta use your brain at the end of the day.
[00:47:27] Dr. Mark Turman: So you’re telling me that AI has a pride problem?
[00:47:30] Dr. Katie Frugé: It does.
[00:47:31] Dr. Mark Turman: It never wants to, it just like human beings.
It never wants to stop, pause, and say, sorry. I don’t know.
[00:47:37] Dr. Katie Frugé: I don’t know. It doesn’t, and it’s so funny, I, I, I should send you screenshots. We could put in the show notes or something of where I’ve, I literally will just say, you just made that up, didn’t you? And it just, I’m sorry. Yes, I did. It looks, do it.
And it’s, it’s pretty humorous when you catch it doing that, but because of the possibility of hallucination and because we don’t know. Why the machines hallucinate sometimes. I just think it, you know, consider it a good resource. Let it, you know, help be a good support for the work you’re doing.
But it should not be your primary research arm, right? That there’s Okay. Books. Yeah. There’s other things out there that we wanna cross reference just because it came out of, you know, chat GPT. Don’t take it for granted that it’s the gospel truth.
[00:48:22] Dr. Mark Turman: Yeah. You need, you need to check it. Number one, you need to check it.
And just, just
[00:48:27] Dr. Katie Frugé: I would check my research assistant to make sure they did good research. You need to make sure this right. And
[00:48:32] Dr. Mark Turman: it’s, and, and, and even just, just listening to you talk about that, it just kind of blows my mind that we would say that this machine is hallucinating again, the word that’s an anthropological term, right?
Mm-hmm. We like no humans hallucinate. Animals don’t hallucinate, and computers certainly don’t hallucinate. But here we are talking about computers hallucinating.
[00:48:58] Dr. Katie Frugé: It’s the words, we’ve got it. It’s so funny. But yeah. But to those that, so support, you know, resource, and then also I just think it’s important to have your own awareness of what your boundaries and benefits are gonna be for ai.
Where do you see it being helpful and where do you see it being a hindrance to ministry and really being intentional and making those decisions and sticking to it, I think is, IM, some of times I think the evils that we do or the, the problems aren’t done intentionally. It’s stunned because we just didn’t, Nope, somebody didn’t think about it or somebody just didn’t think to ask the question.
And so even just being aware of what your boundaries are gonna be for ai and what benefits do you think that are appropriate, I think would go far.
[00:49:42] Dr. Mark Turman: Yeah. And some of those things that you mentioned at the beginning of our conversation, about what it really means to be human, to be incarnational,
to be relational: those are things that are distinctive to human beings. And just that idea, as someone that’s serving others in ministry from a Christian-motivated perspective, of, I could call this person or text this person; which one of these would be better? A call would probably be better.
I know that this is going on in somebody’s life. I could call them, or I could go see them. Which one would be more meaningful? Probably going to see them. And working through thought processes like that really makes a difference in the way ministry actually happens, because the way we help people, the way we serve people, the way we disciple people is life on life, not machine on machine, or,
you know, technology to technology, or technology to human being. Katie, what would you say to somebody who just bumped into you in the elevator and said, Katie, I don’t know anything. Where do I start learning about AI and how to use it?
[00:50:52] Dr. Katie Frugé: It really kind of depends on how you like consuming information, right?
So a couple of different resources that you could go to. The Washington Post has a tech brief that’s super easy to read, and they just send that out regularly, if you just wanna read a news article or something like that to kind of get updated. And there’s a podcast I love called Hard Fork, produced by the New York Times.
Those guys do a great job explaining the technology of the day in a very understandable, digestible way. They’re brilliant reporters, and they understand at a very high level what’s going on with AI, but explain it in an easy-to-follow way. And then I also wanted to plug, real quick, if we wanted to go into more of a philosophical conversation about humanity, what it means to be human, and how that intersects with artificial intelligence: the book The Artifice of Intelligence by Noreen Herzfeld and Ted Peters. I read that earlier this year, and it was a really beautiful intersection between trying to understand a Christian anthropological understanding of humanity and how artificial intelligence is going to challenge us to think deeper about it.
[00:52:00] Dr. Mark Turman: Hmm. Yeah. And we’ll include all of those things in the show notes. And Katie, also, I meant to call this out earlier, but you had recommended I look at the website AI and Faith.
[00:52:10] Dr. Katie Frugé: Oh yes. AI and Faith. Mm-hmm.
[00:52:11] Dr. Mark Turman: Talk about that website for a moment.
[00:52:14] Dr. Katie Frugé: It’s a great site. I believe it is interfaith, isn’t it?
It’s not just Christianity. It’s faith leaders across a whole spectrum of religions who all kind of agree, saying, we believe that there’s a space for us to speak into this issue, that there are important issues that, you know, impact all of us. And yeah, it’s an ecumenical group of different, right?
Mm-hmm. Faith groups, all talking about different issues related to AI. Some of them use AI to write stories and share their stories with that. And so it’s a really great resource, especially for faith leaders who wanna see what other conversations might be happening in the realm of artificial intelligence and faith.
[00:52:56] Dr. Mark Turman: Yeah. And a great opportunity for us as the church, both individually and collectively. There is a role for us there. I mean, we desire the flourishing of our communities, as the Bible would teach us to do, and the promotion of biblical righteousness in every way. And a part of that is something of what I might call a prophetic role, or being the conscience of, yes, our communities.
I know part of your work has to do with speaking biblical truth into places of power, be that in government, legislation, that type of thing. But that’s every Christian’s role: to say, wait a minute, what do the Bible and the Holy Spirit of God say about these things? And just because we as human beings sometimes can do something, it doesn’t mean we should do it, or we should do it in a certain way.
And that’s a role and a place that we need to hold out and stake claim to. I think this website, AI and Faith, is an expression of that across a very broad understanding of faith, but is still something that can be useful to us. Katie, are there one or two or even three names in this world where you say, okay, if this person’s talking about AI, I want to hear what they say?
Some people would say, of course, you gotta listen to Elon Musk, or you’ll hear names like Sam Altman. Yeah, and the guy I was referring to earlier is the CEO and cofounder of Anthropic, named Dario Amodei, I believe is the way you say his name. Are those the names, or are there other names where it’s like, hey, if this guy’s talking about it, I want to know what he is saying about it?
[00:54:31] Dr. Katie Frugé: I love all the ones you just listed. Sam Altman in particular is an interesting figure to follow. He and Elon Musk have crossed paths several times over the past decade or more, I would say. And even last year there was some funny back-and-forth over what was happening with some of his work at OpenAI.
Yeah, Sam Altman is an interesting one. And again, that’s another conversation where, on the issues of integrity, legislation, and transparency, he has a very large voice. And yeah, it’s interesting to hear what he has to say sometimes.
[00:55:05] Dr. Mark Turman: Yeah. And I’d just tell you that it’s important to know these voices.
And as somebody who is now, I guess, in the fall season of my life at 60 years old, I would just say to those listening to our podcast: get ready for a lot of young faces and voices, because a lot of this technology is coming out of the generations that are behind us and that are emerging, very much our kids and certainly our grandkids.
If you’re a grandparent like I am, this is going to be a significant aspect of their world, on a scale larger than what other things have been in our world and have become normal to our lives. Katie, I know we’ve gone a little bit long here, but I just wanted to see if you could give us a metaphor, because I’ve been thinking through this and have asked people about it.
Thomas Friedman wrote in one of his books a couple of years ago about the advent and arrival of cloud computing, which is a part of this conversation as well: the ability for computers to gather, store, and retrieve information very rapidly on a very large scale, which makes the internet possible and makes the way you use your cell phone right now possible.
He likened that to man’s discovery of fire. So people talk about things like the Gutenberg press that way. They talk about the internet that way. Sometimes they talk about cell phones that way. Do you think that AI is like unto the invention of the wheel, the discovery of fire, the arrival of the television, or is it closer to the phenomenon
previously known as Y2K? If you were trying to align this in some way (and if you don’t know what Y2K is, you just need to go look that up and ask AI to explain it to you), what kind of a frame would you put this in right now?
[00:57:07] Dr. Katie Frugé: I appreciate that, and I do just have to give a shout-out to my mom, because we definitely filled our tubs with water on December 31st, 1999.
And I think she still has tubs with, like, grain that she was going to use for, yes, making bread. So anyways, I appreciate the Y2K reference. Honestly, I’m gonna call it fire. It’s going to have the potential to enhance and make things a lot easier for a lot of people, but it’s not gonna be without danger as well.
There are significant possibilities of it being used for good and for evil. So yeah, that’s what I’m gonna go with.
[00:57:47] Dr. Mark Turman: Okay. And that’s an important idea, and the way to think about it is that something like fire or the wheel, or even the Gutenberg press, so substantially changes the way that we experience and engage in life.
That’s where those kinds of huge pivot points come in our world, and, mm-hmm, there’s a lot to think about, and a lot we just won’t know in some ways until we get further into it, and we are going to make some mistakes. There’s no question that there will be some bad actors, as there always are when you have tools and technologies like this.
And we don’t know what those will be, but there is the potential for good, particularly in medical research and the diagnosis and treatment of diseases, that type of thing. What would it mean? What would we say if AI was actually God’s gift and God’s path for overcoming cancer?
Yeah. How would we know?
[00:58:45] Dr. Katie Frugé: Can I add another one? Yeah, sure. That is huge: Bible translations. My brother works, yeah, for a Bible translation company, and AI has been a game changer. We may be hundreds of years ahead of schedule because of some of the work that AI’s gonna be able to accomplish.
[00:59:01] Dr. Mark Turman: Yeah. And you think about, wow, every person in the world being able to have the Bible in their own heart language, their native language, yes, in a matter of a decade or two, maybe, rather than a few centuries. That’s just phenomenal. Absolutely phenomenal. So more to come, more to watch. Yeah.
We’ll check back in with you as you learn more, and we’ll just see if we can keep following this conversation in every good way. I wanna thank you, Katie. It’s always a pleasure to get to talk with you about things and to be a partner with you and with the folks at Texas Baptists. And I wanna thank our audience for listening with us and staying a part of this.
We’ll bring you more at Denison Forum. We’re planning a series of ongoing conversations about AI as it continues to develop, to try to equip you to be up to speed and to be a part of what God’s doing in this way. And again, thank you for being a part of our conversation today. If it’s been helpful, rate and review us, and share this with others as well, and we’ll see you next time on the Denison Forum Podcast.