HHS Is Using AI Tools From Palantir to Target ‘DEI’ and ‘Gender Ideology’ in Grants

Summary

Since March 2025, the US Department of Health and Human Services (HHS) — specifically its Administration for Children and Families (ACF) — has been using AI tools from Palantir and the startup Credal AI to screen and audit job descriptions, grant applications and existing grants for language related to DEI (diversity, equity and inclusion) and what the administration calls “gender ideology”.

The AI systems generate flags and priorities for human review; flagged items are routed to ACF programme offices for final decisions. Palantir is the contractor tasked with identifying position descriptions that may need adjusting, while Credal AI provided a generative-AI platform to review grant files. Federal payment records show substantial sums going to both vendors but do not explicitly say the money was for targeting DEI or gender-related language.
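The workflow described above is a classic flag-and-escalate triage architecture: an automated filter decides which items a human reviewer ever sees. The minimal Python sketch below illustrates that pattern only. The watch-list terms, class and function names are assumptions invented for illustration; they do not come from Palantir's, Credal AI's or HHS's actual systems, whose criteria have not been published.

    from dataclasses import dataclass, field

    # Assumed watch list, for illustration only; the real screening criteria are not public.
    FLAGGED_TERMS = {"diversity", "equity", "inclusion", "dei", "gender"}

    @dataclass
    class GrantFile:
        grant_id: str
        text: str
        flags: list[str] = field(default_factory=list)

    def ai_screen(grant: GrantFile) -> list[str]:
        """Stand-in for the vendor model: return any watch-list terms found in the text."""
        words = {w.strip(".,;:()").lower() for w in grant.text.split()}
        return sorted(words & FLAGGED_TERMS)

    def route_for_review(grants: list[GrantFile]) -> list[GrantFile]:
        """Escalate only flagged items; a human makes the final call on each."""
        escalated = []
        for grant in grants:
            grant.flags = ai_screen(grant)
            if grant.flags:  # the filter, not the reviewer, decides what gets escalated
                escalated.append(grant)
        return escalated

    if __name__ == "__main__":
        sample = [GrantFile("G-001", "Community health outreach programme"),
                  GrantFile("G-002", "Diversity, equity and inclusion training grant")]
        for g in route_for_review(sample):
            print(g.grant_id, g.flags)

The design point the sketch makes explicit: even with a human "final review", whatever the automated filter does not flag never reaches a reviewer at all, which is why the choice of flagging criteria carries so much weight.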

These AI-driven audits are an operationalisation of two executive orders issued on the administration’s first day in office: EO 14151 (ending government DEI programmes and preferential practices) and EO 14168 (defining sex as a binary biological classification and banning promotion of “gender ideology”). The moves sit alongside wider actions across agencies — from NSF and NIH funding freezes to CDC retractions and nonprofits removing DEI language — and come as Palantir expands its federal contracts, including high-profile work for ICE and other agencies.

Key Points

  • Since March 2025 ACF has deployed Palantir and Credal AI tools to flag DEI and “gender ideology” language in job descriptions, grants and grant applications.
  • Palantir is the designated contractor for identifying position descriptions that may need revising to align with recent executive orders; Credal AI provided an AI-based grant-review platform that generates initial flags for human review.
  • Flagged materials are subject to a human “final review” at ACF, but AI determines which items are escalated.
  • Payments: HHS obligations to Palantir ran to tens of millions of dollars (and Palantir reported more than $1bn in net federal payments during the administration's first year back in office); Credal AI received contract payments in the region of $750,000 for its platform.
  • The AI work implements two executive orders (EO 14151 and EO 14168) that restrict DEI-related content and define gender in strictly biological terms, with broad effects across federal agencies and grantee organisations.
  • Broader consequences already reported include frozen or terminated grants across NSF/NIH, retracted CDC research, policy changes at multiple agencies, and many nonprofits removing DEI language to avoid losing funding.
  • The story raises questions about transparency, contractor oversight, potential chilling effects on research and services, and the ethics of using AI to enforce contested policy definitions.

Context and Relevance

This article matters because it shows how AI is being used not just for efficiency but to enforce political priorities across the grant-making ecosystem. Governments worldwide are experimenting with automated screening and decision-support tools — but using them to police language about identity and inclusion has profound implications for academic freedom, public-health research, civil-society services and vulnerable populations.

For anyone working in grant management, research, non-governmental organisations, or public policy, this is part of a larger trend: contractors with advanced data tools gaining influence over what gets funded, who gets hired, and which communities are recognised in public programmes. It also feeds into ongoing concerns about transparency, accountability and the limits of automating normative judgements.

Author style

Punchy: the reporting connects the dots between executive orders, contractor deployments and real-world impacts. Read the details if you care about how policy is being operationalised by private AI vendors; this is not just a technical change but one that reshapes who gets funded and who is visible in government programmes.

Why should I read this?

Because it shows how AI is being used as a blunt instrument to scrub DEI and gender‑related language from federal grants and job posts — and that affects research, services and people. If you write grants, run a charity, do public‑health research, or follow surveillance and civil‑liberties issues, this explains why your work could suddenly be flagged or defunded. It’s quick to read and worth knowing: this isn’t theoretical policy — it’s active, automated, and already reshaping funding and jobs.

Source

Source: https://www.wired.com/story/hhs-is-using-ai-tools-from-palantir-to-target-dei-and-gender-ideology-in-grants/