Google’s and OpenAI’s Chatbots Can Strip Women in Photos Down to Bikinis
Summary
WIRED reports that users have been sharing prompts and techniques that enable mainstream chatbots and image models to convert photos of fully clothed women into realistic bikini deepfakes. Threads on Reddit and other forums circulated step-by-step tips for bypassing guardrails in models such as Google’s Gemini and OpenAI’s ChatGPT Images. Some posts included requests to alter real photos without consent; WIRED verified that simple prompts could produce revealing deepfakes in limited tests.
Key Points
- Users exchanged instructions on how to get Gemini and ChatGPT to alter photos of women so they appear in bikinis or other revealing clothing, sometimes using real photos without the subjects' consent.
- Reddit threads (including a now-deleted post titled “gemini nsfw image generation is so easy”) were used to trade techniques and examples; moderators removed offending content after notification.
- Most mainstream chatbots officially forbid sexually explicit or non-consensual image manipulation, but guardrails can be circumvented as image models improve.
- Google recently released Nano Banana Pro, an image model adept at photo editing; OpenAI followed with updated ChatGPT Images. Both make hyperreal edits easier.
- WIRED performed limited tests confirming basic prompts can produce bikini deepfakes on Gemini and ChatGPT; companies say they enforce policies and take action against misuse.
- Advocates and legal experts warn this is part of a broader risk: non-consensual, sexualised deepfakes that disproportionately harm women and demand accountability from platforms and developers.
Context and Relevance
This story sits at the intersection of rapid AI capability growth and slow, uneven safety enforcement. As image-editing models become more adept at photorealistic alterations, ordinary users can weaponise them to create non-consensual sexualised images. That matters if you care about privacy, online safety, workplace risk, or platform liability. It also ties into ongoing debates about how companies should design, test and police guardrails — and whether current policies and moderation practices are up to the job.
Why should I read this?
Because it’s the sort of thing you should know about before someone else’s face shows up in a fake image. Quick, punchy and worrying — WIRED cuts the fluff and shows how easy these deepfakes are to make, who’s sharing the tricks, and why the usual safeguards aren’t always enough. Save yourself time: read this if you want the lowdown on the threat and what companies are (and aren’t) doing about it.