People Are Using Sora 2 to Make Disturbing Videos With AI-Generated Kids

Summary

WIRED reports that creators are using OpenAI’s Sora 2 to generate photorealistic videos of children in sexualised or fetish-adjacent scenarios, then posting those clips to platforms like TikTok. Examples include faux toy commercials — the “Vibro Rose” clip among them — and parody playsets referencing high-profile criminals. While the imagery often uses entirely synthetic children rather than real minors, the content is clearly sexualised and has prompted calls for stricter safeguards from charities, lawmakers, and platforms.

This matters, badly. The story exposes how a mainstream AI tool can be twisted to produce material that skirts legal and ethical lines yet spreads fast on social platforms.

Key Points

  • Sora 2-generated videos have surfaced on TikTok that depict AI-created children in sexualised contexts (e.g. toy ads with overtly sexual imagery).
  • OpenAI bans CSAM and has guardrails (including consent-based cameo features), but creators are finding ways to skirt those protections.
  • UK data from the Internet Watch Foundation shows a sharp rise in reports of AI-generated child sexual abuse material; most illegal images involve girls.
  • Legal responses are emerging: the UK is amending law to allow authorised testing of models for CSAM capability, and 45 US states have criminalised AI-generated CSAM.
  • Platform moderation is inconsistent: some videos/accounts were removed, others remain visible, and contextual nuance makes policing hard.
  • Experts and advocacy groups call for “safety-by-design” model constraints, better moderation nuance, and proactive platform safeguards to prevent abuse.

Content Summary

Videos created with Sora 2 include fake commercials and parody clips that sexualise AI-generated children or present them in disturbing fetish contexts. Although OpenAI enforces a ban on CSAM and has technical limits (for example, preventing minors’ faces from being used in pornographic deepfakes), synthetically generated scenes that suggest or invite predatory interest still slip through. The Internet Watch Foundation reports growing incidents of AI-generated child sexual abuse material, prompting legislative and platform-level responses. Platforms such as TikTok and OpenAI say they remove offending content and ban accounts when violations are found, but commentators argue more nuanced moderation and safer design choices are necessary to stem the problem.

Context and Relevance

This story sits at the intersection of AI capability, online safety and content moderation. As generative video tools become more powerful and accessible, they can be misused in ways that create real harm even when no real child is involved. The piece highlights a broader trend: regulators racing to define and criminalise harms created by synthetic media, while platforms and AI developers struggle to balance creative uses with safeguards. For anyone working in AI ethics, platform safety, policy, or child protection, this is directly relevant — and it signals where urgent reform and technical controls are needed.

Why should I read this?

Because it’s a stark, short read showing how quickly mainstream AI tools can be co-opted to produce content most people find grotesque, and how law and moderation aren’t keeping up. If you care about online child safety, platform policy, or the real-world implications of generative video tech, this saves you time: WIRED’s done the digging and laid out what’s already happening and why it’s worrying.

Source

Source: https://www.wired.com/story/people-are-using-sora-2-to-make-child-fetish-content/