Lawyers Launch AIMICI to Track Immigration AI as Use Cases Rise 37%

Lawyers launch AIMICI to track AI in immigration as U.S. and Canadian agencies expand automated tools while reducing oversight and transparency for applicants.

Key Takeaways
→ Immigration experts launched AIMICI to monitor government use of automated decision-making and AI tools.
→ The U.S. Department of Homeland Security expanded AI projects by 37% while significantly reducing civil rights oversight staffing.
→ Lawyers warn that automated triage and classification create a ‘low rights environment’ with limited transparency for applicants.

(CANADA) — Immigration lawyers and academics launched a nonprofit this week to track how governments use automated tools and generative AI in immigration decisions, arguing that secrecy around the systems leaves applicants with little way to challenge errors.

The group, called the AI Monitor for Immigration in Canada and Internationally (AIMICI), said it formed to push for transparency about automated decision-making in pathways where families and employers often get rapid outcomes but limited explanations.


AIMICI officially coalesced in late 2025 and drew attention with a major advocacy push reported on February 27, 2026, as debates over AI in public services sharpened on both sides of the border.

Will Tao, an immigration lawyer at Heron Law Offices, helped found the organization alongside Zeynab Ziaie Moayyed and Karina Juma, AIMICI said.

Supporters describe immigration as a uniquely high-stakes setting for automation because agencies process large volumes quickly and, in some streams, applicants have limited rights to appeal or challenge how evidence was weighed.

AIMICI’s launch centers on Canada, where lawyers have criticized an Immigration, Refugees and Citizenship Canada tool called Chinook, which extracts data and summarizes applications.

Lawyers argue that Chinook can produce “inhuman” and “contradictory” reasons for refusal, including in the case Ocran v. The Minister of Citizenship and Immigration.

AIMICI also frames the issue as a “high volume, low rights environment,” where, it says, applicants lack the ability to challenge algorithmic bias effectively.

AI and Oversight Snapshots Cited in Recent Immigration-AI Debate (mid-2025 to Feb 2026)
- 200+ active DHS AI use cases (Feb 12, 2026 inventory)
- +37% growth in use cases since July 2025 (as cited in the inventory update)
- ~146 baseline AI use cases (July 2025 snapshot cited in comparisons)
- ~300 oversight positions eliminated across DHS entities (Mar 21, 2025 reduction-in-force reference)
- 2.1M pending IRCC cases, cited as backlog pressure (late 2025 figure referenced in the debate)

The Canadian push comes as the United States reports a sharp expansion in AI projects across the Department of Homeland Security, alongside changes that reduced staffing in offices that historically handled civil-rights complaints and independent ombuds functions.

→ Analyst Note
If you receive a refusal or RFE that appears templated or contradictory, preserve the full notice and submission record, then consider requesting underlying records (FOIA in the U.S.; ATIP in Canada) to understand what evidence or flags drove the decision.

DHS announced a “reduction in force” on March 21, 2025, targeting three oversight offices: the Office for Civil Rights and Civil Liberties (CRCL), the Office of the Immigration Detention Ombudsman, and the CIS Ombudsman.

Tricia McLaughlin, a DHS spokesperson, defended the cuts in comments released that day. “These offices have obstructed immigration enforcement by adding bureaucratic hurdles and undermining the Department’s mission. Rather than supporting law enforcement efforts, they often function as internal adversaries that slow down operations,” McLaughlin said.

“DHS remains committed to civil rights protections but must streamline oversight to remove roadblocks to enforcement. These reductions ensure taxpayer dollars support the Department’s core mission: border security and immigration enforcement,” she added.

The March 2025 reduction in force eliminated roughly 300 positions across DHS oversight agencies, including nearly the entire staff of the Civil Rights and Civil Liberties office.

DHS also published an updated “Simplified AI Use Case Inventory” on February 12, 2026, showing over 200 active AI projects, a 37% increase in AI use cases since July 2025.

The update signaled faster adoption across DHS components at a moment when outside groups and lawyers increasingly focus on how automated triage, classification, and screening influence case outcomes.

Within USCIS, the inventory referenced tools that fit into the early stages of adjudication work rather than the final signature on a decision, including the USCIS Evidence Classifier within the Electronic Information System (ELIS).

DHS described the Evidence Classifier as an AI-driven tool used to tag and categorize “high-volume, high-impact evidence types” to reduce human review time.

The inventory also listed PAiTH (Private AI Tech Hub), an internal USCIS AI workforce assistant (DHS-2599) deployed to assist in legal research and contract acquisition.

Other automated tools under DHS scrutiny include ATLAS, used for screening, and Mobile Fortify, described as facial recognition.

AIMICI’s founders and U.S. transparency advocates argue that the most consequential effects of automation often appear upstream, when a system sorts documents, flags inconsistencies, or shapes the order in which an officer sees a file.

Growth in AI adoption typically expands automated triage, pattern detection, and document sorting in case processing, which can speed throughput while making it harder for applicants to understand what triggered an adverse step.

In the U.S. context, reductions in oversight staffing can affect how quickly complaints get handled and how easily an individual can escalate suspected technical problems tied to automated tools.

Transparency mechanisms, including public inventories and privacy assessments, play a central role in debates over accountability because they document what systems exist, what they do, and where they operate.

Those mechanisms also matter for legal narratives around due process, especially when applicants and petitioners argue they cannot meaningfully respond to concerns they cannot see.

The U.S. inventory showed DHS AI use cases grew from approximately 146 in July 2025 to over 200 by February 2026, an expansion that arrived as executive-branch posture shifted in late 2025.

A Trump administration executive order in December 2025 took a deregulatory approach to AI, revoking several Biden-era AI safety and transparency requirements and signaling an intent to consolidate AI oversight at the federal level and reduce “burdensome” state-level transparency mandates.

In Canada, critics have characterized Chinook as a high-throughput facilitation tool that standardizes how information appears to decision-makers, even when the underlying applications include long narratives and supporting context.

Lawyers say that when summaries and extracted fields dominate review, applicants can feel as if “human review” is nominal even when an officer signs the final decision.

In U.S.-style workflows, evidence classification and triage can influence what an officer reviews first and how quickly a case moves to the next step, including the issuance of Requests for Evidence.

Lawyers also report “instant” RFEs, where automated systems trigger Requests for Evidence based on algorithmic flags, such as salary inconsistencies or cross-form data mismatches, before a human officer ever reviews the file.

An algorithmic flag in that setting can function as a risk indicator, a document anomaly signal, or a matching issue that prompts faster communication, even if the applicant believes the record is complete.

When an applicant cannot quickly reach an independent office to resolve delays or report suspected technical errors, the burden of sorting out a mistaken flag can grow, lawyers say, because filing deadlines and case clocks keep moving.

In the United States, lawyers and advocates have pointed to the dismantling of the CIS Ombudsman and CRCL as removing independent channels used to address long delays and technical breakdowns that may intersect with automated systems.

AIMICI’s founders present their work as part of an international movement among legal professionals to monitor automated decision-making and press governments to explain how tools shape immigration outcomes.

Both USCIS and IRCC reported record-high application volumes in late 2025, the data showed, adding pressure to adopt tools that reduce handling time.

IRCC cited 2.1 million pending cases, adding to concerns that speed-driven processing can widen the gap between what applicants submit and what officers practically read.

Even when automated systems do not render a final decision, administrative-law questions can arise over reviewability and transparency, because assistance tools can shape the evidence record an officer relies on.

Public reporting instruments such as AI inventories and Privacy Impact Assessments serve as governance tools by describing system purpose, data flows, and intended safeguards, which can inform litigation and oversight inquiries.

Ombuds and civil-rights complaint functions typically interact with immigration stakeholders by receiving complaints, identifying patterns, and elevating systemic concerns inside the agency, creating a back channel that differs from standard case-status inquiries.

Cross-border relevance remains central to AIMICI’s framing, as Canada and the United States face similar pressures from volume, speed, and public scrutiny, while operating under different legal regimes.

AIMICI’s founders argue that without clear explanations of where automation appears in the pipeline, applicants and petitioners can struggle to understand refusals, respond to evidence requests, or challenge inconsistent reasoning.

In the United States, DHS points the public to official documentation that tracks the spread of AI programs and related compliance work, including the agency’s AI Use Case Inventory.

USCIS posts program statistics and annual reporting through its Reports and Studies page, which DHS and outside analysts use to frame workloads and backlogs that drive demand for automation.

DHS also publishes privacy documentation tied to specific systems, including a Privacy Impact Assessment (PIA) for ATLAS, offering a template for how automation gets formally described in federal paperwork.

For broader agency announcements and updates, DHS directs readers to its official newsroom, a hub that immigration lawyers say they now monitor alongside inventories and court records as AI adoption accelerates.

→ In a Nutshell (VisaVerge.com)


Immigration experts launched AIMICI to monitor the rapid expansion of AI in Canadian and U.S. immigration systems. While governments use automation to handle record-high application volumes, critics argue that a lack of transparency and recent cuts to oversight offices leave applicants unable to challenge algorithmic errors. This shift toward automated triage and classification risks prioritizing processing speed over due process and human review.

Robert Pyne

Robert Pyne, a Professional Writer at VisaVerge.com, brings a wealth of knowledge and a unique storytelling ability to the team. Specializing in long-form articles and in-depth analyses, Robert's writing offers comprehensive insights into various aspects of immigration and global travel. His work not only informs but also engages readers, providing them with a deeper understanding of the topics that matter most in the world of travel and immigration.
