AI Slop Is Ruining Reddit for Everyone
Summary
Volunteer moderators and long-time Reddit users are struggling with a surge of AI-generated and AI-polished posts across many large subreddits. Communities built on real, messy human stories, such as r/AmItheAsshole and its offshoots, are seeing rule-breaking or suspicious content that often follows predictable templates. Moderators say the influx began after ChatGPT's late-2022 launch and has accelerated into 2025, creating extra work and eroding trust among users.
The article outlines how moderators try to spot AI via stylistic “tells,” why detection is unreliable, and how the cycle of AI training on scraped Reddit content can make human posts look more like machine-written text. It also covers harmful uses: targeted rage-bait aimed at minorities, automated disinformation in political subreddits, and small-scale monetisation schemes that exploit Reddit karma. Reddit says it removes millions of spam and manipulated posts, but volunteers still face an uphill battle.
Key Points
- Moderators report a noticeable increase in AI-created or AI-edited posts across many popular subreddits since late 2022.
- There are no foolproof tools for detecting AI-generated text; moderators rely on intuition and stylistic "tells."
- The AI feedback loop—where models are trained on scraped Reddit content—blurs the line between human and machine writing.
- Some AI content is used to provoke hatred or target vulnerable groups, especially in relationship- and news-focused communities.
- AI makes it cheap to mass-produce disinformation and astroturfing, increasing the moderation load and risk of manipulation.
- People can monetise activity (via karma, contributor programmes or selling accounts), creating incentives to flood Reddit with low-effort AI content.
- Volunteer moderators bear most of the cost: it takes far more effort to evaluate plausible AI content than to generate it.
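The "tells"-based detection described above can be sketched as a toy heuristic. This is purely illustrative and not from the article; the phrase list and example posts are invented. It also shows why the approach is unreliable: ordinary human writing trips the same flags.

```python
# Toy sketch of "tell"-based AI detection (invented example, not from
# the article). Counts occurrences of phrases often associated with
# AI-polished text. Note that a perfectly human post can still score
# above zero, which is exactly why moderators can't rely on tools
# like this.
AI_TELLS = ("delve into", "tapestry", "it's important to note", "in conclusion")

def tell_score(text: str) -> int:
    """Count how many naive stylistic 'tells' appear in the text."""
    lowered = text.lower()
    return sum(lowered.count(tell) for tell in AI_TELLS)

# A human post that happens to use one flagged phrase:
human_post = "In conclusion, my sister still owes me gas money."
# A stereotypically machine-flavoured post:
bot_post = ("It's important to note that we must delve into this "
            "rich tapestry of family dynamics.")

print(tell_score(human_post))  # prints 1 - a false-positive signal
print(tell_score(bot_post))    # prints 3
```

The scores overlap rather than separate cleanly, mirroring the article's point that intuition plus "tells" yields guesses, not verdicts.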
Why should I read this?
If you spend time on Reddit or care about online communities, this is worth five minutes — it explains why the feed feels faker and angrier, who’s cleaning up the mess (mostly unpaid volunteers), and what that means for trust. Short version: people are tired, the platform’s vibe is changing, and it’s not just nostalgia talking.
Context and Relevance
This piece is important for anyone interested in platform health, content moderation, online radicalisation, or digital policy. It sits at the intersection of AI ethics, copyright disputes (Reddit has sued AI companies for scraping), and community governance. The article highlights a broader trend: as generative AI becomes ubiquitous, platforms and moderators must adapt or risk losing the qualities that made these communities valuable.
Author style
Punchy: the reporting is direct and focused. This isn't a niche grumble about change; it's a clear alert that AI is already reshaping everyday conversation on a major social site. If you work in moderation, platform design, policy, or community management, the details matter.
Source
Source: https://www.wired.com/story/ai-slop-is-ruining-reddit-for-everyone/