How AI Companies Got Caught Up in US Military Efforts
Summary
In this excerpt from Nick Srnicek's Silicon Empires, published by Wired, Srnicek explains how leading AI labs, including OpenAI, Anthropic, Google, and Meta, shifted from bans on military use of their models to active cooperation with US defence agencies between 2024 and 2025. He traces the change to the huge costs of building large models, the appeal of long-term, well-funded defence contracts, and a broader geopolitical turn that has broken the old "Silicon Valley Consensus" favouring globalisation and light regulation.
Srnicek argues that the tech–state relationship has been reconfigured: Big Tech and a rising techno-nationalist right now compete over how technology should serve national security. Venture capital has followed, with startups courting the Pentagon and reshaping the military-industrial complex around agile private firms and cloud infrastructure. The result is a rapid normalisation of military uses of AI and a fracturing of prior liberal, globalist tech orthodoxies.
Key Points
- Top AI labs lifted or relaxed bans on military use of their models in 2024–25 and began partnerships with Pentagon contractors and defence startups.
- Defence funding offers patient, large-scale money with loosely defined success metrics, making it attractive for costly AI development.
- The longstanding Silicon Valley Consensus (globalisation, light regulation) has been undermined by geopolitics and techno‑nationalism.
- A new tech right and defence‑oriented startups are pushing a state‑tech complex centred on national security rather than global markets.
- Big Tech is increasingly embedded in the national security ecosystem via contracts, cloud services, personnel movement and infrastructural dependence.
- The US–China strategic rivalry has been weaponised in rhetoric and policy, accelerating the shift toward techno‑nationalist priorities.
Context and Relevance
This piece matters for anyone following AI governance, defence procurement, and geopolitics. It connects corporate strategy (fundraising, partnerships, personnel) with state incentives (defence budgets, export controls) and shows how financial and strategic pressures can quickly reshape ethical stances. The trends described map directly onto ongoing debates about regulation, responsible AI, export controls, and the role of private firms in conflict.
Why should I read this?
Short answer: because this is where the action is. If you care about who builds AI, who pays for it, or how it might be used in conflict, this excerpt gives a clear, punchy timeline and a solid explanation of why once-reluctant firms suddenly turned to the Pentagon. It is a quick way to get up to speed on why ethical positions shifted and what that might mean for the next round of policy and industry moves.
Source
Source: https://www.wired.com/story/book-excerpt-silicon-empires-nick-srnicek/