CBP Signs Clearview AI Deal to Use Face Recognition for ‘Tactical Targeting’
Summary
United States Customs and Border Protection (CBP) has agreed to pay about $225,000 for one year of access to Clearview AI, a face-search tool built from billions of images scraped from the internet. Access will be extended to Border Patrol’s headquarters intelligence division and the National Targeting Center for “tactical targeting” and “strategic counter-network analysis,” signalling regular, analyst-driven use rather than one-off probes.
The contract claims access to more than 60 billion publicly available images. It requires nondisclosure agreements for contractors handling biometric data but does not specify what images agents may upload, whether searches will include US citizens, or how long data and results will be retained. Civil liberties groups and some lawmakers have raised alarms about the expansion of such biometric surveillance without clear limits or transparency.
Key Points
- CBP will pay roughly $225,000 for a year of Clearview AI access, according to the contract.
- Clearview’s database is described as containing 60+ billion images scraped from public websites.
- Access is granted to Border Patrol’s headquarters intelligence division (INTEL) and the National Targeting Center for “tactical targeting” and network analysis, suggesting routine intelligence use.
- The agreement lacks clear rules on what photos can be uploaded, whether US citizens can be searched, and retention periods for uploaded images and results.
- Civil liberties groups and lawmakers, including Senator Ed Markey, are pushing back and proposing bans on ICE/CBP use of face recognition.
- NIST testing shows face-search systems perform well on high-quality photos, but error rates rise dramatically for uncontrolled, real-world images, sometimes exceeding 20%.
- NIST recommends using such systems investigatively, as ranked candidate lists rather than automatic matches, because automatic matches can be misleading and produce false positives (see the sketch after this list).
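The article does not describe how Clearview or CBP actually run searches; purely to illustrate the distinction NIST draws between investigative, ranked-list output and an automatic match decision, here is a minimal Python sketch. The embeddings, candidate names, and top-k ranking are assumptions for illustration only, not Clearview’s pipeline or API.

```python
import numpy as np

def rank_candidates(probe: np.ndarray, gallery: np.ndarray, names: list[str], k: int = 5):
    """Return the top-k most similar gallery entries, ranked by cosine similarity.

    Deliberately returns a ranked list with scores rather than a yes/no "match",
    leaving the identification decision to a human reviewer.
    """
    # Normalise embeddings so a dot product equals cosine similarity.
    probe_n = probe / np.linalg.norm(probe)
    gallery_n = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery_n @ probe_n
    order = np.argsort(scores)[::-1][:k]
    return [(names[i], float(scores[i])) for i in order]

# Toy example: random 128-dimensional vectors stand in for real face embeddings.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 128))
names = [f"candidate_{i}" for i in range(1000)]
probe = gallery[42] + rng.normal(scale=0.3, size=128)  # noisy view of one gallery face

for name, score in rank_candidates(probe, gallery, names):
    print(f"{name}: similarity {score:.2f}")
```

The point of this output shape is that an analyst sees several scored candidates that each still need independent verification, rather than a single “hit” that can quietly become a false positive.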
Content Summary
The article outlines the new CBP–Clearview agreement, the units authorised to use the tool, and the stated purposes (tactical targeting and counter-network analysis). It emphasises the scale of Clearview’s dataset and the contractual silence on key privacy controls: authorised uploads, searches of US persons, and data retention. The piece situates the deal within broader DHS adoption of commercial biometric tools and notes legislative and civil society responses.
Importantly, the story cites independent testing from the National Institute of Standards and Technology showing substantial accuracy problems when face-search systems process real-world images—an issue that increases the risk of false leads and wrongful targeting if results are treated as anything more than investigative starting points.
Context and Relevance
This matters because it marks another step in embedding commercial biometric surveillance into federal enforcement workflows. CBP’s adoption of Clearview could expand automated or semi-automated face-search capabilities beyond border checkpoints into wider immigration and national-security operations. For anyone interested in privacy, civil liberties, law enforcement oversight, or the governance of AI, the deal raises questions about transparency, legal safeguards, and technological limits.
Why should I read this?
Look, if you care about privacy, who’s watching public photos, or how tech gets stitched into enforcement, this is a big one. CBP quietly buying into a face database scraped from decades’ worth of web photos affects real people and sets a precedent for more surveillance. Read it to know what’s changing and why it might land on your doorstep, or on the doorstep of someone you know.
Author’s take
Punchy and to the point: this isn’t a small procurement. It’s a practical step towards normalising powerful face-search tech inside routine intelligence work. The combination of murky safeguards and known accuracy issues makes the rollout especially worrying, and well worth watching closely.