Would-be thieves use AI to impersonate Marco Rubio

Friday, July 11, 2025

The Daily Article

Why AI is both a helpful tool and an existential threat

July 11, 2025

U.S. Secretary of State Marco Rubio gives a media briefing during the ASEAN Foreign Ministers' Meeting at the Convention Centre in Kuala Lumpur Friday, July 11, 2025. (Mandel Ngan/Pool Photo via AP)


Last month, an imposter created a Signal account pretending to be US Secretary of State Marco Rubio using the display name “[email protected].” The perpetrator then used AI to simulate Rubio’s voice and contacted three foreign ministers, a US governor, and a member of Congress. The actor left voicemails for some while sending invites to others to communicate through the Signal app. 

Upon learning of the scam, the State Department sent a message warning those who may have been contacted. An official said that the hoax was “not very sophisticated” and had been unsuccessful, but the department thought it “prudent” to raise awareness just in case. 

However, this was not the first time AI has been used in an attempt to trick high-level diplomats and government representatives. A similar incident occurred in May involving Susie Wiles, President Trump’s chief of staff. While that effort was similarly fruitless, it’s only a matter of time before those behind the scams improve enough to succeed. 

As Hany Farid, a professor at the University of California, Berkeley, who specializes in digital forensics, warns:

You just need 15 to 20 seconds of audio of the person, which is easy in Marco Rubio’s case. You upload it to any number of services, click a button that says “I have permission to use this person’s voice,” and then you type what you want him to say.

You don’t have to be the secretary of state or a member of the president’s inner circle to become the target of these attacks. Global cybercrime—much of it fueled by innovations in AI—is projected to cost upwards of $10.5 trillion this year, and that number is only going to rise as the technology improves.

But while we are increasingly aware of the risks AI poses for crime, large parts of our society seem willing—and even excited—to welcome its use in ways that could pose an even greater risk.

AI in education

The American Federation of Teachers, the second-largest US teachers’ union, announced recently that Microsoft, OpenAI, and Anthropic have invested a combined $23 million to help create an AI training hub for educators. This is the latest example of tech companies attempting to make inroads into schools and universities to help teachers and students learn how to use—and become dependent on—AI to augment their studies.

Chris Lehane, OpenAI’s chief global affairs officer, hopes that AI will eventually join reading, writing, and arithmetic as a core skill everyone must learn. And, as scary as that sounds, there is something to the idea that learning how to use AI well is important given the costs of using it poorly.

For all the advances the industry has made, hallucinations and lies are still an unavoidable part of the technology. A recent study by law school professors found that AI tools made “significant” errors that posed an “unacceptable risk of harm” when asked to summarize a law casebook. 

Moreover, Microsoft found that using AI chatbots to research and write could hinder critical thinking. It is notable that one of the creators of these artificial intelligence models would help publicize such a conclusion, given that those tasks are precisely how an increasing number of people, both in the classroom and outside of it, use the technology. 

And that risk to critical thinking is, in my estimation, the greatest threat AI poses.  

A generational threat?

Aaron MacLean, a senior fellow at the Hudson Institute, cautions, “The substitution of Large Language Models for genuine thinking is a generational threat. At stake is no less than the life of the mind.”

While that sentiment is perhaps a bit exaggerated, he makes a powerful argument for why the small, everyday ways in which AI has become a staple of people’s lives could have dramatic and devastating effects on people’s ability to reason and interact with their environment in the future. 

To illustrate his point, MacLean recounts a time during his freshman year of college when a classmate told their professor, “I know what I think, I just can’t get the words down on the page,” to which the professor responded, “Well, you don’t actually know what you think, then. The act of writing the thing is the same thing as the thinking of it. If you can’t write it, you haven’t actually thought it.”

Now, you have to have a thought before you can write it down, but the professor’s point was that there is something in the struggle of taking ideas and learning to convey them in a way that makes sense that is instrumental to developing our ability to think and reason well. Taking disparate thoughts and turning them into a coherent argument requires a mastery of information that goes beyond the simple possession of data. 

AI makes it possible to get to the answer—or at least something approximating it—without having to do the work, and that’s a problem.

The person God created you to be

Ultimately, for all its downsides, AI can be a helpful tool. It excels at accumulating information, though it’s far less trustworthy when it comes to knowing what to do with it. Moreover, there are a number of questions that just need a simple answer, and relying on AI for those—with the caveat that you check its sources—is fine. 

But, increasingly, that’s not how it’s used. 

It shouldn’t come as a surprise that people would be enticed to take the easier path. And that’s especially true when, as is the case in many circumstances, the final product can be just as good as, or better than, what we could do on our own. 

ChatGPT is going to write a better paper than most college freshmen. It may even create a better presentation or write better emails than many professionals. 

What it cannot replicate are the unique thoughts and Holy Spirit-given insights that God will only give to people. Nor can it help you learn to hone and develop skills that the Lord may want to use to advance his kingdom in the future.

Even Jesus had to grow “in sophia”—the Greek word for “the art of using wisdom”—as part of the Father’s will for his life (Luke 2:52). If that was true of the incarnate God, it is most certainly true for each of us as well. 

However, that process requires that we place a higher value on the people we will become by committing to the work than on the chance to finish the work quickly. And that is a difficult ask when we face a seemingly endless list of demands on our time and attention. 

So, when you are next forced to make that choice, what will you do?

Again, AI has its place, and the Lord can use it to help facilitate his calling in our lives. But it must remain a tool and nothing more, or we risk becoming more reliant on artificial intelligence than on our God-given intelligence. 

That is a line we cannot afford to cross, but also a line that will continue to blur as AI gets smarter and the masses who become overly reliant on it go in the opposite direction.

So please don’t settle for the person it’s easy to be rather than the person God created you to be. He has gifted and called you to something greater than that. 

Will you commit to that calling today? 

Quote of the day:

“I do not feel obliged to believe that the same God who has endowed us with senses, reason, and intellect has intended us to forgo their use and by some other means to give us knowledge which we can attain by them.” —Galileo

What did you think of this article?

If what you’ve just read inspired, challenged, or encouraged you today, or if you have further questions or general feedback, please share your thoughts with us.

Denison Forum
17304 Preston Rd, Suite 1060
Dallas, TX 75252-5618
[email protected]
214-705-3710


To donate by check, mail to:

Denison Ministries
PO Box 226903
Dallas, TX 75222-6903