
Meta announces it will label AI-generated content: How to have wisdom for the modern age

February 15, 2024

A hand holds a smartphone showing the Meta company logo in front of a screen showing the Facebook logo. Meta owns Facebook. By ink drop/


Two years ago, I wrote that we had crossed a threshold: AI-generated images could fool humans around 50 percent of the time. It was a disconcerting prospect.

Two weeks ago, sexually explicit deepfake images of Taylor Swift began circulating online. Even with millions at her disposal, she has no legal recourse: federal law does not prohibit non-consensual deepfakes (though Congress is currently considering such legislation).

Fake content continually makes for frightening headlines, especially the kind generated by AI. However, last week, Meta (which owns Facebook, Instagram, Threads, WhatsApp, and more) announced they would soon start labeling AI-generated content.

Will their efforts succeed? Or will they only mislead us further?

How Meta plans to label AI-generated content

On the heels of public pressure, companies are taking measures to combat unwanted deepfakes and AI-generated content. Last week, Meta said they want to work with other companies in the industry to label AI-generated images, video, and audio.

In the article announcing the effort, Meta mentions upcoming elections as a particular concern. Many countries use disinformation (purposefully misleading people with fabricated evidence online) to sow chaos in their political enemies’ elections. Russia and China use these tactics regularly.

To introduce transparency and combat disinformation, Meta wants other companies in the industry that allow people to make AI-generated images to embed a kind of invisible marker in those images’ file data. These indicators would be part of the file’s “metadata.”

Meta could then detect that marker and label the content “Imagined with AI.” The plan depends on companies like Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock tagging images as AI-generated when people create them. Meta hopes to roll this out in “the coming months.”
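For the technically curious: industry metadata standards already include a vocabulary for this. The IPTC “Digital Source Type” list defines a “trainedAlgorithmicMedia” value for AI-generated media, which is the kind of marker Meta describes. As a rough illustration only (not Meta’s actual detector, which would parse XMP/C2PA metadata properly), a naive check might scan a file’s raw bytes for that term:

```python
# A rough illustration only: scan a file's raw bytes for the IPTC
# "Digital Source Type" value used to mark AI-generated media.
# Real detectors parse XMP/C2PA metadata structures; this naive
# substring check just shows where such a marker lives.

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC term for AI-generated media

def looks_ai_labeled(data: bytes) -> bool:
    """Return True if the raw bytes contain the AI-generation marker."""
    return AI_MARKER in data

# Hypothetical metadata fragment, as it might appear inside an image file:
sample = (b'DigitalSourceType='
          b'"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"')
print(looks_ai_labeled(sample))                # True
print(looks_ai_labeled(b"plain photo bytes"))  # False
```

Note that this kind of marker sits in ordinary file metadata, which can be stripped by re-saving or screenshotting an image. That fragility is part of the problem discussed next.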

Google wants to roll out something similar, writing in November of 2023, “We’ll introduce updates that inform viewers when the content they’re seeing is synthetic.” Again, they say it will arrive over “the coming months.”

A hole in Meta’s AI-labeling efforts

While the resolve of Meta and Google, along with Congress’s consideration of AI regulation, is welcome, the labeling policy may lead to unintended consequences.

Namely, it could lure us into a false sense of security.

Even if all the “good” companies tag their AI images properly, others probably won’t, which could leave gaps and create even greater confusion. If we expect every AI-generated image to be labeled, we may not pay as close attention to the ones without the label.

In the coming months, then, as this effort gets underway, some images, videos, and audio that claim to be real, or that lack the “Imagined with AI” label, may still be AI-generated.

Wisdom for the modern age

In the past few decades, technology has barraged us with developments that have fundamentally changed our lifestyles, relationships, and work. The internet, smartphones, social media, and sophisticated AI each seem to constitute a major shift in society. It feels a bit like five printing-press leaps of technology smushed into one generation.

In such an era, we need a particular kind of wisdom, informed not just by biblical principles but also by cultural savviness—a knowledge of the times.

It comes with its own platitudes, like, “Don’t trust everything you see on the internet.” It also requires alertness to technological developments and a healthy dose of humility.

However, especially on social media, where anyone can post anything (unlike with publishing companies or newspapers, which vet their content), fake content is everywhere.

Since 2017, Facebook has deleted nearly 28 billion fake accounts. Billion.

Often, fake content is relatively harmless and entertaining. Other times, it’s injurious and malicious. With AI tools becoming more accessible and sophisticated, the fake content problem will only worsen.

So, we need to become like the men from the tribe of Issachar “who had understanding of the times, to know what Israel ought to do” (1 Chronicles 12:32).

Social media is rife with “empty deceit.” See to it that no one “takes you captive” (Colossians 2:8).

