U.S. Immigration and Customs Enforcement deployed its Mobile Fortify facial recognition app for street encounters without completing a required Privacy Impact Assessment (PIA), leaked documents obtained by 404 Media showed, prompting new demands from rights groups and an audit by the Department of Homeland Security Inspector General.
The handheld tool lets ICE and U.S. Customs and Border Protection agents scan faces in public to identify migrants and other individuals, expanding biometric checks beyond fixed sites and into fast-moving field stops where decisions can come quickly.
Mobile Fortify’s rollout renewed scrutiny of how agents use biometric matches during roadside and sidewalk encounters, where a single scan can shape what happens next and what proof an individual can realistically produce on the spot.
The documents described Mobile Fortify as a rapid identity-check system that queries multiple government datasets, including IDENT/HART, which has “over 270 million records” with face prints, iris scans, fingerprints, and DNA.
Agents can also search the Traveler Verification Service, which contains entry and exit photos, as well as TECS, a database that includes travel and other information such as banking, social media and license plate data.
The leaked materials also referenced USCIS citizen databases, placing immigration enforcement queries alongside data used for other federal functions and raising questions about how broadly the app can reach in practice.
Mobile Fortify does not rely on a single biometric method, the documents said, and it also collects fingerprints contactlessly, adding another layer to what rights groups describe as field-based biometric collection.
ICE’s own language in the materials said the app “does not provide the opportunity for individuals to decline or consent to the collection and use of biometric data/photograph collection,” a stance that privacy advocates said collides with expectations that people can understand and refuse sensitive data collection.
In the same documents, ICE described the operational use of matches in ways that alarmed critics, raising concerns that agents may treat results as “definitive” for immigration status even when a person offers other documentation.
ICE determined that no new Privacy Impact Assessment was needed despite the expanded querying and other uses described in the leaked materials, according to the documents, a conclusion that drew immediate criticism from civil liberties and privacy organizations.
A coalition that includes the Electronic Frontier Foundation called on DHS to halt use of the technology, release privacy analyses, and clarify facial recognition policy for field encounters.
Critics highlighted at least one incident in which ICE wrongly deemed a U.S. citizen deportable based on a biometric match, using it as an example of how errors can escalate when a scan carries outsized weight.
Rights advocates said street-level facial recognition creates distinct legal and practical pressures because it can compress fact-finding into minutes, leaving little time for verification, limited immediate access to counsel, and few options to challenge a mistaken match.
They also warned that nonconsensual scans and database mismatches can trigger wrongful detention or deportation for U.S. citizens and lawful residents, not just for migrants who lack immigration status.
Concerns about error rates formed a central part of the criticism, with opponents warning of disproportionate harms for Black individuals and other groups when facial recognition performs unevenly across populations.
Mobile Fortify’s field use also places broad discretion in the hands of individual agents, critics said, because officers can decide when to scan someone in a public encounter without any opt-out mechanism.
On February 4, 2026, Inspector General Joseph Cuffari launched an audit titled “DHS’ Security of Biometric Data and Personally Identifiable Information (PII),” initially targeting ICE and the Office of Biometric Identity Management (OBIM).
The audit examines the collection, management, sharing and security of immigration enforcement data under law and policy, bringing oversight attention not just to the scanning app but to the systems that store and move biometric data across government.
Sens. Mark Warner and Tim Kaine prompted the review after raising concerns on January 29 about mass biometric collection, social media surveillance and constitutional violations, placing Mobile Fortify within a broader set of worries about surveillance and enforcement.
Lawmakers also questioned how ICE treats facial recognition outcomes in the field, particularly when a person presents documents that would normally settle questions of citizenship.
Rep. Bennie Thompson, the ranking member of the House Homeland Security Committee, reported that ICE treats Mobile Fortify matches as definitive even when individuals present proof of citizenship, sharpening concerns that the technology can override paper records.
On February 5, 2026, Sens. Ed Markey, Jeff Merkley, and Ron Wyden, along with Rep. Pramila Jayapal, introduced the “ICE Out of My Face Act,” a bill that would bar ICE and CBP from using facial recognition and other biometric technologies and require deletion of collected data.
“Without oversight, this technology is dangerous in the hands of any government, and the Trump Administration is abusing it to trample on privacy, freedom of speech, and civil liberties,” Merkley said in a statement.
India McKinney, the EFF’s federal affairs director, called the technology “ripe for abuse.”
The oversight push leaves unresolved how Mobile Fortify operates day to day, including what safeguards govern when agents can initiate scans, what training guides use in fast-moving encounters, and how results are audited when a match drives an enforcement decision.
The leaked documents also fed concerns about whether facial recognition use has extended beyond immigration status checks and into monitoring of public activity, with reports of targeting protesters adding to First Amendment fears.
Civil liberties groups said the pace of street encounters heightens due process risks because people can struggle to contest errors immediately, especially when officers treat a biometric result as definitive and move quickly to detention decisions.
Those warnings extend beyond immigration enforcement because biometric identification, once normalized for street stops, can affect anyone whose face appears in a government database, critics argued.
Mobile Fortify also sits within a broader DHS push for AI-driven enforcement, which has drawn controversy around other tools that rely on biometric matching.
Critics pointed to the CBP One app and what they described as flawed facial recognition technology that was biased by race and sex, arguing that similar technical and policy failures can migrate into street tools that operate with less transparency.
Opponents also argued that Mobile Fortify contributes to a “border everywhere” effect by allowing immigration-style checks far from border crossings, with the practical reach determined less by geography than by where an officer decides to activate the app.
The app’s ability to query datasets with travel history and other personal information also intensified concerns about data sharing, including access to state motor vehicle records and data broker information.
Critics said those links can create end-runs around sanctuary policies when enforcement relies on data flows rather than local cooperation, turning records collected for licensing or commercial purposes into immigration enforcement inputs.
ICE has not clarified any opt-out mechanism for people confronted in public, and consent standards for street scans remain uncertain in practice, leaving questions about how individuals can refuse biometric collection during encounters.
The dispute over Mobile Fortify has become a test of whether privacy rules and oversight can keep pace with enforcement technologies that move biometric identity checks from controlled settings into everyday public spaces, where a scan can change the course of someone’s life in seconds.
