The Department of Homeland Security expanded artificial intelligence across immigration control in 2025, rolling out new pilots, updated policy rules, and broader use of AI‑driven surveillance and adjudication tools. Officials say these systems speed routine work and help officers focus on complex files. Advocacy groups and legal scholars warn the U.S. AI expansion is moving faster than guardrails, raising transparency concerns and civil liberties risks for people who live in, travel to, or seek refuge in the United States 🇺🇸.
DHS AI playbook and early pilots

DHS set the tone in January 2025 with its AI playbook, a seven‑step guide for safe, ethical use of generative AI in case handling. The document:
- Limits AI to mission‑enhancing roles and requires human review of all outputs.
- Highlights three pilots that shaped the plan:
  - Dynamic training modules for refugee and asylum officers.
  - Document summarization for Homeland Security Investigations to spot criminal patterns.
  - FEMA support for local hazard mitigation plans.
Officials frame the changes as efficiency gains that shorten initial processing for straightforward cases while standardizing how officers read files.
Field deployment: Customs and Border Protection
Customs and Border Protection (CBP) sits at the center of field deployment. A 2025 inventory lists 75 AI use cases, with 31 already in use. These include:
- Facial recognition
- Cargo scanning
- Anomaly detection
- Predictive threat assessments at ports of entry
Thirteen of those use cases are flagged as potentially affecting public safety and rights. Many systems rely on biometric data, a point of strong debate because facial recognition error rates tend to be higher for people of color. Civil society groups say false matches can lead to:
- Extra screening
- Missed flights
- Detention
- Denial of entry
Oversight, leadership, and internal review
Leadership instability added to oversight worries. The DHS Chief AI Officer seat turned over twice in 2025, with Eric Hysen departing in January and David Larrimore in April, leaving the post vacant as adoption widened.
Internally, these bodies are charged with reviewing risks and guiding rollout:
- DHS AI Task Force
- DHS Privacy Office
- Office for Civil Rights and Civil Liberties
Critics argue these bodies lack independent human rights voices and too often approve sensitive tools without public notice or outside review.
Federal policy shifts and implications
Policy direction shifted at the White House level. President Trump repealed President Biden’s executive orders that had limited federal use of AI, removing prior fairness and safety constraints. A July 2025 White House AI Action Plan pushed rapid adoption, deregulation, and export of American AI, citing national security and economic goals.
DHS maintains that its playbook still enforces human oversight and bars AI from making final decisions on eligibility, detention, or removal. Still, advocacy groups argue that fewer federal limits, combined with wider field deployment, tilt the balance away from due process.
Social media screening and long‑term records
Since 2019, the State Department has collected social media handles from an estimated 14 million visa applicants each year, with records kept indefinitely. Digital rights groups say this creates long‑tail risks when immigration systems use automated tools to scan for patterns.
Concerns include:
- Small errors that can snowball and be hard to fix
- Disproportionate harm for asylum seekers and travelers with common names
- Increased need for community legal education and “know your rights” efforts
VisaVerge.com reports that community organizations have stepped up trainings at airports and border crossings, warning people to prepare for extra questions tied to algorithmic flags.
What DHS says AI changes in the field
DHS describes a four‑step workflow for 2025:
- Case intake: AI sorts and summarizes case files and flags routine matters for faster review.
- Officer training: Generative modules simulate interviews and adjudication scenarios for practice.
- Risk assessment: Automated tools estimate compliance and flight risk to support human choices.
- Final adjudication: Human officers review all AI outputs and make the decision.
Officials claim three main benefits:
- Faster handling of simple cases
- More consistent results across offices
- More time for officers to focus on complex legal judgments
Legal experts agree AI can speed low‑risk tasks such as document summaries, but they caution that AI cannot interpret sudden policy shifts or nuanced eligibility rules and should never replace careful human review.
Rising criticism on bias, secrecy, and rights
Advocacy groups—including the Promise Institute for Human Rights, the Black Alliance for Just Immigration, and the Electronic Frontier Foundation—warn about wrongful detentions and due process harms tied to facial recognition and predictive tools.
Key points of criticism:
- In 2024, more than 140 groups asked DHS to suspend select pilots, citing bias and lack of public details about how systems are chosen, tested, and audited.
- Critics highlight the CBP inventory: despite potential rights risks, many systems are not flagged as rights‑impacting.
- People targeted by automated flags rarely learn why they were flagged or how to challenge results.
Specific numbers underscore scale and risk:
- 75 AI use cases at CBP
- 31 already deployed
- 13 flagged for safety and rights impact
- Social media screening affects roughly 14 million visa applicants annually
Advocacy lawyers say a single false match at the border can trigger costly, slow‑to‑correct chains of events—especially for people with limited English or legal help. The DHS Privacy Office asserts safeguards exist, but critics argue internal boards are insufficient without outside, independent oversight.
Practical effects for individuals
For immigrants, travelers, and sponsors, the effects are mixed:
- People with clean, well‑documented files may see quicker first‑look reviews.
- Those who trigger an automated alert may face more questions or secondary inspection.
- Refugee and asylum officers training with AI scenarios may gain consistency, but there is concern that training data can reflect past bias.
Legal scholars recommend that applicants:
- Keep full records of travel, employment, and family ties
- Print key documents in case automated tools miss details in digital files
Politics, possible inflection points, and likely paths
The debate is politically charged. Supporters argue the U.S. must move fast to protect borders and public safety while maintaining economic strength, and that the playbook’s rule—AI assists but does not decide—is appropriate.
Opponents raise these concerns:
- Repeal of Biden‑era AI limits
- The vacant Chief AI Officer position
- Rapid spread of surveillance tools and perceived secrecy
Advocates for reform want:
- Public reporting on error rates by race and nationality
- Clearer notice to those affected
- Simple ways to challenge AI‑assisted decisions
Two factors could shape the next phase:
- Whether DHS fills the Chief AI Officer post with a leader backing stronger audits and public reporting
- Whether sustained pressure from civil society brings independent oversight
Pending choices on ports of entry, asylum screening, and social media rules could lock in norms for years. VisaVerge.com’s analysis suggests the most likely near‑term path is further rollout at the border and in training, paired with piecemeal transparency steps, unless Congress or the courts intervene.
Important takeaway: the promise and risks of AI in immigration control are growing together. Pressure for clear rules and public checks is unlikely to fade.
What people can do and resources
If you believe your privacy or civil rights have been affected, you can file complaints with the DHS Privacy Office: https://www.dhs.gov/privacy.
Community groups advise travelers and visa applicants to:
- Bring printed itineraries and contact details
- Keep calm during extra screening
- Ask for a supervisor if they believe a system error is at play
Lawyers emphasize that AI can speed simple files but human expertise remains critical for any case involving discretion, hardship, or shifting rules.
For now, the balance between operational efficiency and civil liberties will depend on leadership choices, oversight mechanisms, and ongoing public scrutiny.
This Article in a Nutshell
In 2025 DHS accelerated AI adoption across immigration enforcement, issuing a seven‑step AI playbook that mandates human review and limits AI to mission‑enhancing roles. CBP’s 2025 inventory documents 75 AI use cases—31 active—including facial recognition and predictive assessments; 13 are flagged for public‑safety and rights concerns. The White House repealed prior federal AI limits and issued a July 2025 action plan promoting rapid adoption, contributing to faster deployment. Advocacy groups and legal experts warn of transparency gaps, racial bias in biometrics, wrongful detentions, and insufficient independent oversight. DHS emphasizes efficiency gains and consistent decision‑making, but critics call for public reporting on error rates, clearer notice to affected individuals, and independent audits. The future balance between efficiency and civil liberties will hinge on leadership appointments, external oversight, and possible Congressional or judicial intervention.