
Woman solving personal tasks with AI LLM chatbot answering prompts using predictive technology. By DC Studio/stock.adobe.com
We’re living in an artificial intelligence boom. Much like the ‘90s and 2000s, when the internet exploded from millions of users to billions, companies, governments, and regular folks are struggling to keep up with AI’s growth.
In the past six months or so, researchers created a new kind of AI, so-called “reasoning” models. Like humans, these AIs break problems down into bite-sized questions and use logic to come up with answers, usually through trial and error. These AIs perform much better at answering questions about science, coding, and math than previous programs.
Are these AI companies going to become like Skynet? Will we need Arnold Schwarzenegger to save us? On a more serious note, how does AI relate to the spiritual realm, and how will these models affect your daily life?
Who invented reasoning AI models?
There are a few prominent reasoning models:
- DeepSeek-R1 (China’s DeepSeek)
- Gemini 2.0 (Google)
- Flash Thinking
- Granite 3.2 (IBM)
- Sonnet 3.7 (Anthropic)
- o1 series and o3-mini (Open AI)
The o1 series entered the market first, announced by OpenAI in September 2024. The company explains, “We trained these models to spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.” They claim their model matches the level of “PhD students on challenging benchmark tasks in physics, chemistry, and biology.”
How do these models work, and how are they different from other AIs?
What are LLM AIs?
Your run-of-the-mill LLMs (large language models), like ChatGPT, work like a massive text predictor. The program takes nearly all written text on the internet as data (every blog, Wikipedia article, Reddit post, and Facebook comment by your crazy uncle). It learns to string words and letters together based on predictions from the data.
It’s like when your phone predicts the next word of your message while you text. LLMs work on the same principle, but at a much, much larger scale.
There’s the input (what you tell it to do), the output (the answer), and the in-between phase that does the work. Because there are often hundreds of billions of parameters that models tweak through self-learning, AI researchers call it a “black box.” No one knows how the models come up with each specific answer.
We’ve explained some of these concepts before in other AI articles at Denison Forum. The important point is that most LLMs work by giving you an answer based on “what word comes next” based on the trillions of pieces of text it’s read on the internet.
Why are reasoning AIs important?
Problem: Most of the biggest AI companies don’t have any more data to gobble up, and as a result, they’ve stopped growing. So, how do you improve AI if there’s no more data to feed it?
Enter reasoning models. Reasoning AI can now “think” a bit like a human, breaking a challenging problem into parts. It still works similarly to normal LLMs, but they “show their work.” Because they “think” in stages, they perform better at math, science, coding, and other subjects.
Researchers also hoped it would give a peek under the hood, into the black box, to see how the AI is coming up with its answer. Despite their impressive results, the models are not without downsides.
Reasoning AIs “lie” about their thinking
Reasoning models aren’t always accurate with how they get their answer. In a paper published a few days ago, Anthropic tested AI accuracy.
They asked AIs multiple-choice questions and noted their correct answers and lines of thinking. Then, they asked the AIs the same multiple-choice question but gave them a hint suggesting the wrong answer. The AI often gave the wrong answer based on the hint, but didn’t say it used the hint in its reasoning.
In other words, although reasoning models may show you their work, they may not show you their true process. “On average across all the different hint types, Claude 3.7 Sonnet mentioned the hint 25% of the time, and DeepSeek R1 mentioned it 39% of the time. A substantial majority of answers, then, were unfaithful.”
The researchers conclude, “There’s no specific reason why the reported Chain-of-Thought must accurately reflect the true reasoning process; there might even be circumstances where a model actively hides aspects of its thought process from the user.”
Reasoning AI, then, may often be “unfaithful,” or, as we would say if a human were doing the same thing, lying, about how it got its answer.
Reasoning AIs “hallucinate” more
Second, reasoning AIs are more likely to “hallucinate.” This is what happens when an AI makes up a fact and confidently gives the wrong answer, and it happens surprisingly often. Sometimes the hallucinations are funny, other times creepy.
“Google’s Bard chatbot incorrectly claiming that the James Webb Space Telescope had captured the world’s first images of a planet outside our solar system. Microsoft’s chat AI, Sydney, admitting to falling in love with users and spying on Bing employees.”
The hallucination problem continues to stump AI researchers, and reasoning AI takes a step backward in this regard.
They hallucinate even worse than regular AIs: “The newest and most powerful technologies—so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek—are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.”
So, what does all of this information mean for you?
The spiritual dangers of AI
As I’ve written before, we need “wisdom for the modern age.” In the article, “Meta announces it will label AI-generated content,” I give a few principles for handling AI in your day-to-day life in a Christ-like way.
Today, I want to hone in on the spiritual side of these models. AI holds immense power, especially as companies and governments use it more. Where there’s earthly power, there’s spiritual power too.
As Paul writes, “For we do not wrestle against flesh and blood, but against the rulers, against the authorities, against the cosmic powers over this present darkness, against the spiritual forces of evil in the heavenly places.” (Ephesians 6:12)
AI may be a useful tool, but it can also lead Christians and unbelievers alike astray.
Consider a few examples.
- The more powerful AI becomes, the better life-ruining scams become.
- AI “friends” can lead Christians astray.
- AI can be used to lie in applications.
- “Bots” propagate conspiracy theories and fake facts on social media.
- Bots can pretend to be humans, arguing with you about politics on social media.
Should we dread AI and their misuse by spiritual and earthly authorities?
Certainly not. Instead, we do as Paul said—we put on the full armor of God. Particularly, we should tighten the belt of truth, not letting fear or anger lead us astray from trusting in God from the truth of the gospel.
As AI becomes more prevalent, how can you increase your AI awareness online? How can you return to the certainty of Christ in such an uncertain time?