AI Deepfakes Are Impersonating Pastors to Try to Scam Their Congregations
Summary
WIRED reports that generative AI is being used to create convincing impersonations of pastors and religious leaders, which scammers deploy to solicit donations, spread inflammatory sermons, or manipulate congregations. High-profile cases include Father Mike Schmitz and multiple churches across the US and overseas. The deepfakes appear as videos, direct messages, and phone calls; platforms such as TikTok hosted many of these clips before they were flagged and removed.
Key Points
- AI-generated videos and voice clones are being used to impersonate pastors to solicit money or influence congregations.
- Scams come in many forms: deepfake videos, DMs, hacked social accounts, and voice-cloned phone calls requesting fund transfers.
- Pastors with large online followings are especially vulnerable because there is abundant audiovisual material to train models.
- Platforms can monetise viral AI content, creating incentives for creators to post sensational deepfakes.
- Some churches and leaders are experimenting with AI themselves, but experts warn of mental-health risks and the potential for manipulation.
Content Summary
The piece opens with Father Mike Schmitz warning his large audience after encountering AI impersonations. Security experts such as Rachel Tobac note that pastors are easy targets because of public footage and regular livestreams that provide training data for voice and face cloning. The article documents incidents from multiple US states and abroad where pastors’ likenesses were used in scams or sensational viral clips. It also points out the broader ecosystem: platforms, creator incentives, and churches using AI tools themselves—sometimes uncritically. Observers warn that AI can reinforce users’ beliefs and even exacerbate mental-health issues tied to religious delusions.
Context and Relevance
This story sits at the intersection of AI safety, social-media policy and community security. As more public figures—especially trusted community leaders—build online followings, the risk of impersonation grows. The article illustrates how easy-to-use generative tools plus platform incentives can produce real-world harm: financial fraud, misinformation and emotional distress among congregations. It’s a case study in why platform moderation, digital literacy for communities, and technical safeguards (voice verification, two-factor approval for fund transfers) matter now.
Why should I read this?
Look, if you care about online scams, church safety or how AI is eroding trust, this is one you’ll want to skim. It shows how easily generative tools can weaponise faith for money — and why small steps (verify links, check official channels, don’t act on urgent-sounding requests) actually save real people from getting duped.
Author style
Punchy — this is timely and worrying. If you work in community leadership, platform policy, or security, treat the examples here as practical warnings: the tactics are low-effort, high-impact, and already in the wild.