HHS Is Using AI Tools From Palantir to Target ‘DEI’ and ‘Gender Ideology’ in Grants
Summary
Since March 2025 the US Department of Health and Human Services (HHS) — specifically the Administration for Children and Families (ACF) — has been using AI systems from Palantir and the startup Credal AI to scan and flag grant applications, existing grants, and job descriptions for language tied to diversity, equity and inclusion (DEI) and what the administration calls “gender ideology.” The tools generate initial flags; ACF staff carry out final reviews. Contracts and payments to Palantir and Credal AI appear in federal records, though the specific DEI-targeting work was not publicly announced by the companies or HHS.
Key Points
- HHS/ACF deployed Palantir software to identify “position descriptions that may need to be adjusted” under new executive orders targeting DEI and gender ideology.
- Credal AI provided a generative-AI platform to review grant submissions and flag potential noncompliance; flagged items are then reviewed by ACF programme staff.
- Funding: Palantir has received substantial federal work in Trump's second term (HHS payments alone exceeded $35m in some listings, and the company earned over $1bn across agencies in the first year), while ACF paid Credal roughly $750,000 in 2025.
- The AI screening operates in the context of two January 2025 executive orders that ban federal promotion or funding of DEI-related programmes and define sex strictly as a biological binary.
- The executive actions and resulting AI reviews have rippled across agencies and the nonprofit sector — NSF, CDC, NIH and others have altered reviews, paused research, or frozen funds tied to DEI-related language.
- Palantir’s wider government work (including with ICE) and rapid growth in federal contracts highlight the company’s expanding role in enforcement and policy implementation.
Content Summary
The Wired investigation outlines how ACF uses Palantir and Credal AI to automate first-pass reviews of job descriptions and grant documents, flagging language banned under Executive Orders 14151 and 14168. Those orders seek to eliminate federal DEI programmes and insist on a strictly biological definition of sex. The AI systems generate flags and priorities for human review; final decisions remain with ACF staff. The article places this work alongside a broader purge of DEI language across federal agencies and many nonprofits, and notes Palantir's substantial and growing federal revenue, plus its controversial contracts with agencies such as ICE.
The piece documents concrete consequences: millions in grant funding frozen or terminated across agencies, retracted or paused research mentioning LGBTQ or trans-related terms, internal staff reassignments to purge DEI content, and thousands of nonprofits editing mission language to avoid losing federal support.
Context and Relevance
This story matters because it shows how AI is being operationalised to enforce political directives inside government — not just to speed processes, but to police language and eligibility across grants and job descriptions. It ties technical systems (Palantir/Credal AI) to policy outcomes (DEI and gender-ideology removal) and to larger trends: the use of automation to scale ideological enforcement, the growth of surveillance/analysis vendors in government, and collateral impacts on research, social services and civil-society organisations.
Author style
Punchy: the reporting connects the dots fast — contracts, executive orders, and the real-world effects on funding and services. If you care about how technology is shaping policy enforcement and who gets to decide what language or programmes survive, read the full piece.
Why should I read this?
Because it’s literally where politics meets code. The article explains, in plain terms, how AI tools are being used to scan grants and job ads for words the administration dislikes, what contractors are getting paid, and what that means for research, charities and vulnerable groups. We’ve done the digging so you don’t have to — it’s worth a quick read if you want to know how tech is being used to carry out policy.