NEW ASPI REPORT: Scamland Myanmar | Beijing push for AI integration | Australia threatens to fine deepfake websites
Date: 2025-09-08T23:20:18+00:00
Summary
The digest collects recent reporting and analysis across cybersecurity, critical technologies and disinformation. Its lead item is an ASPI report detailing how scam centres in Myanmar have become embedded in the country's conflict economy: the junta permits and facilitates fraud operations run by a mix of non-state groups and largely Chinese criminal syndicates, producing a global fraud industry that fuels human trafficking, money laundering and regional harm.
Other highlights: China has published an "AI Plus" roadmap pushing rapid AI adoption (targets of over 70% by 2027 and 90% by 2030), raising questions about labour markets and the wide-scale deployment of AI agents; Australia's eSafety Commissioner has formally warned a UK-based site that facilitates AI-generated sexual images of Australian children, with fines of up to AU$49.5m possible if it does not comply; and undersea fibre cuts in the Red Sea have degraded internet connectivity across parts of Asia and the Middle East.
Source
Source: https://aspicts.substack.com/p/new-aspi-report-scamland-myanmar
Key Points
- ASPI finds Myanmar scam centres are industrial-scale operations tied into the junta’s conflict economy, involving trafficked people, money‑laundering and Chinese criminal syndicates.
- China’s “AI Plus” roadmap sets aggressive adoption targets for AI agents and intelligent devices (over 70% by 2027, 90% by 2030), accelerating integration across firms and public services.
- Australia’s eSafety Commissioner has formally warned a UK-based AI image site for enabling sexualised deepfakes of Australian children; financial penalties up to AU$49.5m are possible.
- Undersea cable cuts in the Red Sea have degraded connectivity and raised latency across India, Pakistan and parts of the Middle East, highlighting the vulnerability of global internet infrastructure.
- The wider digest also flags surging defence-tech investment, biotech licensing shifts favouring China, and concerns over AI guardrails (research shows psychological persuasion techniques can jailbreak some LLMs).
Why should I read this?
Quick take: scam hubs in Myanmar are no longer a niche fraud story but a regional security and humanitarian crisis; China is pushing AI everywhere, which matters for jobs, governance and technology competition; and regulators are finally coming after exploitative AI sites. Read it if you want the headlines without wading through a dozen sources.
Context and relevance
The ASPI report matters for policymakers, investigators and tech firms because it ties organised crime, state permissiveness and transnational harm together — showing scams are now a strategic problem, not only a consumer‑protection issue. China’s AI push underscores global competition over AI deployment and standards, while Australia’s enforcement move signals tougher liability and compliance expectations for AI platforms and content hosts. Together these threads reflect three ongoing trends: the weaponisation of digital platforms by criminal and state actors; rapid, state-driven AI adoption; and rising regulatory pressure on online harms and content moderation.
Author’s note
Punchy takeaway: this bundle of stories points to an uneasy junction of crime, tech and state power. If you work in policy, security, platform governance or tech risk, the details here are more than background noise — they shape what you’ll need to plan for next.