A little-noticed judicial footnote has set off fresh debate over how artificial intelligence (AI) is used in U.S. immigration enforcement and courts, after a judge warned that AI-assisted reports and decisions may contain factual mistakes and threaten people’s privacy. The judge flagged accuracy and privacy concerns after finding gaps between immigration agents’ official accounts and what appeared in body camera footage, raising questions about how far technology is quietly shaping cases that can end in deportation.
The footnote and the mismatch between report and video

According to the footnote, the officer’s written description of an enforcement encounter did not fully match what the video showed.
That mismatch, the judge suggested, may reflect growing dependence on AI tools to draft reports or summarize evidence, with the risk that important details are added, omitted, or framed in a misleading way.
In immigration cases, where small factual differences can decide whether someone is removed or allowed to stay, any doubt about the truth of official records alarms both lawyers and legal scholars.
Broader pattern: AI mistakes in courts and legal filings
The warning comes as judges, immigration agencies, and lawyers across the country turn to AI systems to speed up their work. In several high-profile incidents outside immigration, federal judges have already admitted that AI use in court rulings and legal briefs led to clear errors, including fake case citations. Those cases have become examples of what can go wrong when technology is trusted too quickly, without strong human review.
The new footnote suggests similar risks now extend deep into the immigration system.
How AI is used inside immigration courts
AI tools are increasingly used to:
- Help read long transcripts
- Search past decisions
- Suggest language for rulings
Judge John P. Burns is one of the judges reported to rely on AI to assist in drafting decisions and handling heavy caseloads. But lawyers say they often have no way to know:
- When a judge has used AI in their case
- What system was used
- How it may have influenced the outcome
That lack of clarity fuels fresh concerns about whether the legal reasoning and facts in written decisions accurately reflect the hearing record.
AI on the enforcement side: surveillance and data integration
Privacy advocates are equally alarmed by how immigration authorities use AI on the enforcement side. Agencies such as Immigration and Customs Enforcement (ICE) and the Department of Homeland Security (DHS) now connect many large databases, including:
- Social media activity
- Department of Motor Vehicles records
- Financial files
- Census data
These systems feed powerful algorithms that search for patterns, possible immigration violations, or links to other targets. Critics say the result is a vast surveillance web that raises serious privacy concerns and civil liberties risks, especially when people do not know how their data is collected or used.
Because these combined databases pull in information from many sources, including people who have never been charged with a crime, civil rights groups warn of overreach and mission creep. They fear that AI-driven tools may:
- Push agents toward racial profiling
- Lead to people being singled out for living in certain neighborhoods
- Target people for attending certain churches or posting in certain languages online
When AI flags someone as suspicious, the person usually has no chance to challenge the code or the data that triggered the alert, even if the alert later feeds into an arrest or removal case.
Concrete example: conflicting body camera footage
The judge’s footnote about conflicting body camera footage adds a concrete example of how these systems can affect real lives. If an officer’s account is drafted or shaped with AI assistance, and that account conflicts with video from the same encounter, defense lawyers will want to know:
- Which version the court trusts and why
- Whether AI changed or summarized the report in a way that favored enforcement
- Whether similar errors lurk in thousands of other cases that never receive close review
Transparency: policy gaps and inconsistent rules
Transparency rules have not kept up with these changes. A 2025 policy memorandum from the Executive Office for Immigration Review (EOIR), the agency that runs the nation’s immigration courts, allows judges to rely on AI tools to improve efficiency in handling dockets. However, the policy:
- Does not require judges to tell parties when AI has been used to review evidence, write language, or shape legal analysis
- Stands in contrast to many courts that now demand attorneys certify any use of AI in drafted filings and check citations carefully
The EOIR policy has drawn criticism for that gap.
This uneven transparency has started to worry due process advocates. They argue that when the government or a judge can quietly lean on AI, but immigrants and their lawyers must follow tough disclosure rules, the playing field becomes even more tilted. People in removal proceedings already face language barriers, limited access to counsel, and tight deadlines. Adding hidden algorithmic tools into the mix raises new fairness questions that courts and policymakers have not yet fully addressed.
Bias risks and triage concerns
Experts in technology and discrimination also stress the danger of built-in bias. Many AI systems learn from past data, including prior arrest records, past immigration enforcement, and older case outcomes. If those older decisions reflect racial or national-origin bias, the AI can copy and even strengthen that pattern, making the problem worse over time.
VisaVerge.com reports that researchers have repeatedly found algorithmic bias in risk assessment tools and other automated systems used in criminal justice and welfare programs, and warn that similar patterns are likely in immigration settings unless strict safeguards are in place.
The same concerns apply to AI tools now being tested for “triage” in asylum and visa processing, where algorithms may help decide which cases get fast review and which are delayed. Even if such tools are framed as neutral helpers, they can still disadvantage certain groups if the underlying data or design reflects bias. Because many immigrants come from marginalized communities, any skew in the system can translate into thousands of unfair outcomes that are hard to detect from the outside.
Policy demands from privacy and civil liberties groups
Privacy and civil liberties groups are pushing for stronger rules before AI becomes even more embedded in the immigration system. They want:
- Clear limits on how long data can be kept
- Defined rules on what agencies can share and for what purposes
- A simple way for immigrants to learn what information the government holds on them
- A straightforward process to correct mistakes, especially when those mistakes may block legal status or trigger deportation
Legal norms, reliability, and open procedures
Legal scholars point out that courts have long insisted on reliable evidence and open procedures in removal cases because the stakes are so high. The judge’s warning about AI-assisted reports and conflicting body camera footage suggests that some of those old rules may not fit well with new technology.
If a person’s fate depends on data pulled from secret systems, or on a report partly written by an algorithm, the usual methods of cross-examination and document review may not be enough to spot mistakes.
What the footnote means going forward
For now, the footnote does not change formal law, but it adds to a growing record of judicial unease. Each time a judge publicly questions AI’s role in legal decisions, it increases pressure on agencies and lawmakers to slow down and introduce clearer guardrails. Many observers expect more challenges in appeals, where lawyers may argue that unchecked AI use has violated due process rights.
How quickly those protections appear remains an open question. Immigration agencies are under political and practical pressure to:
- Move cases faster
- Cut backlogs
- Stretch limited staff
All of these factors make AI tools very appealing. At the same time, the new warning from the bench shows that speed cannot come at the cost of reliability and privacy, especially in a system that decides who can stay in the United States and who must leave.
A judicial footnote flagged discrepancies between an officer’s written report and body camera footage, suggesting AI-assisted drafting may introduce factual errors. Immigration agencies increasingly connect large databases that feed algorithms, raising privacy and bias concerns. The EOIR permits judges to use AI but does not require disclosure, prompting civil liberties groups to demand transparency, audits, and clear safeguards to protect due process in deportation and asylum cases.
