Anthropic Teams Up With Its Rivals to Keep AI From Hacking Everything

Author style

Punchy: This is a high-stakes, industry-wide preflight — big players are pooling access to a potent new Claude model to see what breaks before the bad actors do. If you care about digital infrastructure, read the detail.

Summary

Anthropic has announced the Claude Mythos Preview model and convened Project Glasswing, an industry consortium that includes Microsoft, Google, Apple, Amazon Web Services, Nvidia, the Linux Foundation, Cisco, Broadcom and more than 40 other organisations. Partners have private access to Mythos Preview so they can test it against their own systems, uncover vulnerabilities, and patch exploit chains before the model (or similar capabilities) becomes more widely available.

Anthropic says Mythos Preview—trained to be strong at coding—has demonstrated substantial cyber capabilities as a by-product: vulnerability discovery, exploit development, penetration testing, endpoint assessment, configuration hunting and binary analysis without source code. The company is using a staggered, controlled release to encourage coordinated disclosure and to give platform maintainers time to mitigate risks. Anthropic and partners emphasise urgency: they expect broadly available models with similar power within months and warn that current security paradigms may be upended.

Key Points

  • Anthropic announced Claude Mythos Preview and launched Project Glasswing, a cross-industry cybersecurity collaboration.
  • Project Glasswing members include major platform and infrastructure companies (Microsoft, Google, Apple, AWS, Nvidia, Cisco, Broadcom, Linux Foundation, etc.).
  • Mythos Preview, though trained for coding, can discover vulnerabilities, produce exploit chains and proofs of concept, and analyse binaries without access to source code.
  • Anthropic reports the model has already surfaced thousands of critical vulnerabilities, including long-standing, overlooked bugs.
  • Release is staggered and private-first to allow coordinated vulnerability disclosure and give vendors time to patch.
  • Industry partners frame the model as both a defensive tool and a risk: it can accelerate defenders but also lower the barrier for attackers.
  • Anthropic stresses Project Glasswing must scale beyond a handful of firms to be effective; the group aims to identify unanswered questions and solutions for an AI-driven security landscape.

Context and relevance

This matters because increasingly capable AI models change the economics and pace of cyber operations. Tools that can automatically find and chain vulnerabilities will reshape both offensive and defensive security: attacks that once required expensive, specialised expertise may become routine, while defenders gain new capabilities for large-scale scanning and remediation.

Project Glasswing is notable for the unusually broad cooperation between rivals and infrastructure maintainers. That coordination is a practical step toward industry-level mitigation strategies and mirrors broader debates about responsible release, dual-use risk, and the governance of powerful AI systems. For organisations responsible for critical infrastructure, software supply chains, or platform security, the initiative foreshadows urgent operational and policy work in the months ahead.

Why should I read this?

Look — if you run systems, write code, or care that the internet doesn’t get looted by automated exploit factories, you should read this. It explains who’s getting early access to a model that can practically play both hero and villain, why Anthropic is trying to manage the release, and what it means for how fast attackers and defenders could move. Short version: this is where cybersecurity meets AI acceleration, and it’s moving fast.

Source

Source: https://www.wired.com/story/anthropic-mythos-preview-project-glasswing/