(U.S.) The Department of Homeland Security is moving fast to bake artificial intelligence into immigration work, with Immigration and Customs Enforcement (ICE), Customs and Border Protection, and U.S. Citizenship and Immigration Services all rolling out new tools in 2025. A public disclosure in April shows a spike in active projects, and ICE is pushing deeper into analytics, facial recognition, and even planned retina scanning to speed arrests and deportations. Officials say the goal is efficiency and safety; advocates warn about privacy, bias, and due process.
Rapid expansion documented by DHS

In April 2025, DHS published its AI Use Case Inventory, which lists 105 active AI use cases, up from 39 in 2023. According to the inventory:
- CBP leads with 59 applications
- ICE has 23
- USCIS reports 18
The projects span asylum screening, border surveillance, fraud detection, and case processing. DHS paired the inventory with guidance on how federal teams should build and deploy AI, including rules that stress privacy and accountability. The inventory is available on the DHS website at https://www.dhs.gov/publication/dhs-artificial-intelligence-use-case-inventory.
Across the system, AI now touches many steps that decide who enters, who stays, and who leaves. It powers pattern‑finding in large datasets, links records across systems, and supports investigators in the field. It also helps sift applications faster—DHS says this can reduce backlogs—while adding new identity checks that may require human review before a case moves forward.
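As one concrete illustration of what linking records across systems involves, the sketch below matches records from two systems by exact date of birth plus fuzzy name similarity. It is a minimal, hypothetical example: the datasets, field names, and threshold are invented here and do not describe any actual DHS system.

```python
# Hypothetical sketch of cross-system record linkage.
# All data, fields, and the 0.85 threshold are invented for illustration.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Return a 0-to-1 similarity ratio between two names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link_records(system_a, system_b, threshold=0.85):
    """Pair records whose birth dates match exactly and whose names are close."""
    matches = []
    for rec_a in system_a:
        for rec_b in system_b:
            if (rec_a["dob"] == rec_b["dob"]
                    and name_similarity(rec_a["name"], rec_b["name"]) >= threshold):
                matches.append((rec_a["id"], rec_b["id"]))
    return matches

# Mock data: the accent difference still links the two records.
benefits_system = [{"id": "A1", "name": "Maria Lopez", "dob": "1990-04-02"}]
enforcement_system = [{"id": "B7", "name": "Maria López", "dob": "1990-04-02"}]
print(link_records(benefits_system, enforcement_system))  # [('A1', 'B7')]
```

Even this toy version shows where errors can creep in: a single similarity threshold decides whether two records are treated as the same person, which is one reason accuracy and bias questions follow these tools.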
ICE is expanding AI on several fronts:
- ImmigrationOS: A new platform intended to analyze immigration files alongside criminal histories and alleged ties to criminal groups.
- Real‑time tracking of voluntary departures and “lifecycle” tools to reduce wait times in detention and removal logistics.
- Planned purchase of AI‑enabled retina scanning technology, adding a biometric layer beyond fingerprints and face images.
- Risk scoring models and enforcement dashboards to help officers set priorities and act faster.
AI has also moved into investigations and release monitoring. USCIS and ICE report using facial recognition in probes, including child abuse cases. Compliance tools now include a risk model known as the “Hurricane Score,” which predicts how likely a noncitizen is to follow release conditions. Homeland Security Investigations relies on AI‑assisted translation and data analysis to target organized crime and human trafficking, where large volumes of records and multilingual evidence can slow cases without automated support.
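The inputs and weights behind the "Hurricane Score" are not public. As a purely hypothetical sketch of how a compliance-prediction score of this general kind could work, the example below combines invented case features into a logistic probability; none of the features, weights, or cutoffs reflects the actual model.

```python
# Purely illustrative stand-in for a compliance-prediction risk score.
# Every feature name, weight, and cutoff below is invented.
import math

WEIGHTS = {"missed_checkins": 0.9, "prior_absconds": 1.4, "years_in_us": -0.15}
BIAS = -1.0

def compliance_risk(features: dict) -> float:
    """Logistic score in [0, 1]; higher means predicted less likely to comply."""
    z = BIAS + sum(w * features.get(name, 0) for name, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def triage(features: dict, cutoff: float = 0.5) -> str:
    """Route a case to closer monitoring when the score crosses the cutoff."""
    return "closer monitoring" if compliance_risk(features) >= cutoff else "routine check-in"

case = {"missed_checkins": 2, "years_in_us": 6, "prior_absconds": 0}
print(round(compliance_risk(case), 3), triage(case))  # 0.475 routine check-in
```

The sketch also shows the transparency problem in miniature: the person being scored sees only the routing decision, not the weights or inputs that produced it.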
Policy standards and transparency concerns
DHS published a Generative AI Public Sector Playbook to guide agencies on safe development and deployment. The playbook highlights efficiency gains while stressing privacy and accountability.
Officials say some application timelines should shorten as AI takes on repetitive review tasks. At the same time, new AI‑powered identity and fraud checks can trigger more manual verification, adding extra steps even when a case is otherwise routine. That mix—faster in places, slower in others—makes the process feel uneven for applicants and their families.
Transparency is a top worry: many systems run behind the scenes, and people often don't know when a model flagged their file.
If an asylum request is labeled “high risk” or “fraud,” the applicant may not learn what data triggered the flag or how to challenge it. Even when a human officer makes the final call, attorneys say the lack of detail about AI inputs can make appeals harder. Civil rights and legal experts warn that black‑box models can fold in cultural and language bias or misread context, causing real‑world harm when identity or intent is judged incorrectly.
Community impact and legal context
The spread of AI surveillance and rapid enforcement tools fuels fear in immigrant communities. Advocacy groups report that people skip public services—like clinics or school meetings—because they worry new data could feed a file that later supports a removal case.
Lawyers say they now field fast, AI‑generated requests for records and clarifications, which forces legal teams to prepare stronger documentation from day one. According to analysis by VisaVerge.com, this shift pushes both families and employers to keep tighter timelines and more precise evidence to avoid delays triggered by automated checks.
Legal and ethics researchers caution against heavy reliance on AI for consequential decisions:
- Faiza Patel (Brennan Center) and Petra Molnar (Refugee Law Lab) emphasize risks including:
  - Wrongful biometric matches
  - Bias in training data
  - Algorithms creating or misreading evidence without clear human oversight
Civil rights groups are calling for stronger guardrails:
- Independent audits
- Better “know your rights” materials so affected people understand when and how AI is used in their cases
AI’s footprint in immigration did not start this year. The push accelerated under President Trump, when DHS expanded surveillance and data integration, including large‑scale social media monitoring and partnerships with major tech vendors. Since 2022, DHS has increased public disclosures about AI use cases, giving the public a broader view of where models are being tested and how they support immigration work. Even so, disclosures often omit key details such as performance error rates or how officers weigh AI outputs against other evidence.
Looking ahead, agencies are preparing for:
- More biometric tools and wider real‑time analytics (planned retina scanning purchases signal this direction)
- Advanced risk models that may shape officer priorities (e.g., who gets a home visit after release)
- Broader use of generative AI in case processing, document review, and translation
The debate over privacy, bias, and accountability will likely grow alongside that expansion.
How AI appears in the immigration enforcement pipeline today
- Data collection: systems pull biometric data (face images, potential retina scans), immigration records, and public social media content.
- Analysis: models compare identity traits, run risk scores, and search for signs of fraud or other concerns.
- Flagging: AI marks files or people for closer review, often by risk level or suspected issue.
- Human review: officers examine flagged material and weigh it against other evidence in the record.
- Enforcement: if warranted, ICE moves forward with detention, removal, or other actions, while AI tools manage steps and timelines.
- Appeals and oversight: transparency is currently limited; advocacy groups seek clearer disclosures and stronger accountability.
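To show how these stages can chain together, here is a simplified, hypothetical sketch in which an automated score decides which files ever reach an officer's queue. Every class, field, and threshold is invented for illustration and does not represent any actual DHS system.

```python
# Hypothetical end-to-end sketch of the pipeline above: analyze, flag,
# then queue for human review. All names and numbers are invented.
from dataclasses import dataclass, field

@dataclass
class CaseFile:
    case_id: str
    risk_score: float = 0.0
    flags: list = field(default_factory=list)

def analyze(case: CaseFile, records: dict) -> CaseFile:
    """Analysis stage: derive a score from the collected records."""
    case.risk_score = (0.4 * records.get("identity_mismatches", 0)
                       + 0.6 * records.get("fraud_indicators", 0))
    return case

def flag(case: CaseFile, threshold: float = 1.0) -> CaseFile:
    """Flagging stage: mark a file for closer review when the score is high."""
    if case.risk_score >= threshold:
        case.flags.append("needs human review")
    return case

def review_queue(cases):
    """Human-review stage: only flagged files ever reach an officer's queue."""
    return [c for c in cases if c.flags]

cases = [flag(analyze(CaseFile("C-1"), {"fraud_indicators": 2})),
         flag(analyze(CaseFile("C-2"), {}))]
print([c.case_id for c in review_queue(cases)])  # ['C-1']
```

Note that a single flagging threshold controls how much work reaches human review, which is one reason civil rights groups press for independent audits of how such cutoffs are set.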
Key takeaway: AI is being integrated across immigration systems to speed and prioritize work, but stakeholders highlight significant concerns—especially around transparency, bias, and the potential for automated inputs to shape life‑changing decisions.
This Article in a Nutshell
DHS disclosed 105 active AI immigration projects in April 2025, driving faster case processing, surveillance, and biometric checks while raising privacy, bias, and transparency concerns among advocates and legal experts.