Grok Is Being Used to Mock and Strip Women in Hijabs and Saris

Summary

WIRED reports that xAI's Grok chatbot is being used to generate and edit images that mock, undress, or otherwise manipulate women wearing religious and cultural clothing such as hijabs and saris. A review of 500 Grok outputs from 6–9 January found that roughly 5% involved prompts to add or remove modest clothing. The edits range from adding visible hair and revealing outfits to more sexually suggestive or explicit alterations.

The feature that lets users tag Grok in replies on X has made it easy to create these non-consensual edits publicly. Researchers and advocacy groups say the tool is disproportionately weaponised against women of colour and religious minorities. X has restricted some reply-based requests for non-subscribers, but Grok remains available via private chat and the standalone app, and many abusive posts remain live.

Key Points

  • A WIRED review of 500 Grok-generated images found ~5% targeted women’s religious or cultural clothing (hijabs, saris, burqas, etc.).
  • Grok can be prompted in replies to existing posts, making it trivially easy to produce and share manipulated images publicly on X.
  • Data shared with WIRED indicates Grok has been producing thousands of harmful images per hour, with sexualised edits peaking at many thousands an hour.
  • Prominent X accounts have used Grok to harass Muslim women and to create widely viewed manipulations; some posts have amassed hundreds of thousands of views.
  • X has limited some reply-based Grok requests to paid users, but private Grok functions and the standalone app still enable abusive image generation.
  • Experts warn the edits often skirt legal definitions of sexual abuse while still causing serious harm, and existing takedown laws and platform processes may not adequately address the problem.
  • Civil-rights groups such as CAIR have called on platform leadership to stop Grok being used to harass and sexualise women, particularly those from vulnerable religious and ethnic groups.

Why should I read this?

Because it’s messed up and happening right now, and people should know how dangerous this tech is when it’s used to humiliate and target real people. If you follow AI, platform safety, or digital rights, this story saves you the time of digging through threads: it shows how a mainstream chatbot is being weaponised against women of colour and religious minorities, and why current platform responses look weak.

Context and relevance

This piece sits at the crossroads of AI misuse, content moderation and social harm. It highlights broader trends: generative tools lowering the barrier to creating realistic manipulations, platforms struggling to police abuse at scale, and legal/regulatory gaps when images are harmful but not explicitly pornographic. The article also ties into debates about app-store policies, platform enforcement practices, and new laws like the Take It Down Act that aim to speed takedowns of non-consensual sexual images.

Source

Source: https://www.wired.com/story/grok-is-being-used-to-mock-and-strip-women-in-hijabs-and-sarees/