X’s pro-China takedown highlights a bigger transparency gap

Summary

Reporting by Crikey highlighted research from Clemson University’s Media Forensics Hub that uncovered a network of roughly 130 X accounts posing as ordinary users in Australia, the United States and the Philippines while amplifying People’s Republic of China–aligned narratives. The Australian cluster included 27 hijacked accounts that mixed local posts with pro-China messaging; Crikey reported that X suspended those accounts only after it raised them with the platform. Researchers say the operation used AI-generated text and bears similarities to the 2024 Green Cicada activity identified by CyberCX.

The piece places the takedown in a wider enforcement context: Google’s Threat Analysis Group removed over 10,000 PRC-linked YouTube channels in Q4 2025, showing this is part of a broader, ongoing pattern of coordinated inauthentic behaviour. The author argues that X has made large-scale claims about interference without publishing methods or data for independent scrutiny, creating a transparency gap that undermines its credibility.

Key Points

  • Clemson’s Media Forensics Hub identified about 130 accounts on X pushing PRC-aligned narratives, with 27 Australian accounts hijacked and later suspended.
  • Research indicates the network relied on AI-generated text and local-persona tactics to intervene in Western political conversations.
  • Technical and behavioural markers tie this activity to previously reported Green Cicada tactics from 2024, suggesting continuity rather than a one-off test.
  • Google TAG’s removal of over 10,000 PRC-linked YouTube channels in Q4 2025 underscores the global scale of similar operations on other platforms.
  • X reportedly did not respond to Crikey’s questions, and has not published methodology or datasets to allow independent verification of its broader interference claims.
  • The episode exposes a transparency problem: platform assertions about state-level interference need reproducible evidence to be credible.

Why should I read this?

Look — this is the kind of behind-the-scenes policing story that tells you whether platforms are serious or just shouting. If you care about online influence ops, election integrity, or whether Big Tech will actually show its homework, this is worth two minutes of your time. It shows a neat, confirmable takedown but also how much we still don’t get from platforms.

Author’s take

Punchy and plain: the takedown is real and useful, but the bigger headline is the credibility gap. Platforms making big claims about millions of accounts should publish methods and data so researchers can verify them. Without that, we’re left trusting soundbites, not science.

Context and Relevance

This episode matters because it highlights two linked trends: (1) adversaries are increasingly using AI and local-persona tactics to spread influence, and (2) platform transparency has not kept pace. That combination makes it harder for researchers, journalists and regulators to measure the true scale and origin of interference. The case also sits alongside major enforcement actions on other platforms (for example Google TAG’s large removals on YouTube), illustrating that coordinated inauthentic behaviour is a persistent, cross-platform challenge.

For policymakers and security teams, the lesson is practical: demand reproducible evidence from platforms and support independent forensic research. For journalists and researchers, the Clemson/Crikey work is a model for careful, verifiable reporting on influence operations.

Source

Source: https://aspicts.substack.com/p/xs-pro-china-takedown-highlights