
America’s largest personal injury law firm, Morgan & Morgan, was hit with $5,000 in sanctions for submitting court filings riddled with fake AI-generated cases, exposing how unchecked technology now threatens the integrity of our justice system.
Story Highlights
- Morgan & Morgan attorneys sanctioned a total of $5,000 in Wyoming federal court for citing eight fabricated cases produced by AI hallucinations.
- Trend has exploded since the 2023 Mata v. Avianca case, with Thomson Reuters identifying 22 incidents in just five weeks during summer 2025.
- Courts impose escalating penalties, including fines, disqualifications, and suspensions, to protect precedent-based justice from AI deception.
- Yale researchers track hundreds of cases, warning that fake citations erode public trust in an already strained legal system.
- Judges demand human verification, rejecting tech shortcuts that prioritize speed over truth in high-stakes proceedings.
The Landmark Mata v. Avianca Precedent
In early 2023, New York attorney Steven A. Schwartz of Levidow, Levidow & Oberman used ChatGPT to research a personal injury suit against Avianca Airlines. The resulting federal brief cited six nonexistent cases, complete with realistic names, quotes, and details such as “Varghese v. China Southern Airlines.” Judge P. Kevin Castel of the Southern District of New York ordered the citations verified after opposing counsel failed to locate them, and a June hearing confirmed the fabrications. Schwartz admitted relying on ChatGPT, and on June 22, 2023, Castel imposed a $5,000 sanction on him and his firm for neglecting to verify the AI output. This marked the first widely publicized U.S. case of AI hallucinations in a court filing by a practicing attorney.
The incident ignited national media coverage and prompted Chief Justice John Roberts to address AI risks in his 2023 year-end judiciary report. AI hallucinations arise because large language models are trained on vast data without true comprehension, producing plausible but false outputs, a failure sometimes called confabulation. Courts run on precedent, so fabricated citations mislead judges and corrode trust. After ChatGPT’s launch in November 2022, adoption surged amid lawyer shortages, but repeated failures to verify output exposed ethical lapses.
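The check courts keep demanding is mechanical: confirm every citation exists before filing. As a rough illustration only (a minimal sketch, not any firm’s or vendor’s actual tool), the Python snippet below pulls reporter-style citations out of a draft and flags any that a lookup cannot confirm; the VERIFIED set is a hypothetical stand-in for querying Westlaw, Lexis, or a free service such as CourtListener.

```python
import re

# Minimal sketch of a pre-filing citation check, NOT a real citator:
# extract reporter-style citations (e.g., "925 F.3d 1339") from a draft
# brief and flag any that a verified lookup cannot confirm.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                   # volume number
    r"(?:U\.S\.|S\. ?Ct\.|F\. ?Supp\.(?: ?[23]d)?|F\.(?:2d|3d|4th)?)"
    r"\s+\d{1,5}\b"                                   # first-page number
)

# Hypothetical stand-in for a live lookup against a real citator.
VERIFIED = {"569 U.S. 108"}

def flag_unverified(brief_text: str) -> list[str]:
    """Return every extracted citation the lookup cannot confirm."""
    return [c for c in CITATION_RE.findall(brief_text) if c not in VERIFIED]

draft = (
    "Plaintiff cites Kiobel v. Royal Dutch Petroleum, 569 U.S. 108, and "
    "Varghese v. China Southern Airlines, 925 F.3d 1339."
)
print(flag_unverified(draft))  # -> ['925 F.3d 1339']
```

By construction the sketch flags the fabricated Varghese citation; in practice, verification means pulling and reading the cited opinion, not just matching strings.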
Escalating Cases Hit Major Firms
By 2025, the problem had proliferated. In February 2025, Judge Kelly H. Rankin in the District of Wyoming sanctioned Morgan & Morgan, America’s largest personal injury firm, with $5,000 in total fines (the lead attorney paid $3,000 and was removed from the case) for submitting eight fake AI-generated cases. Thomson Reuters Westlaw’s study from June 30 to August 1, 2025, uncovered 22 similar incidents, many leading to sanctions. Courts have grown harsher, noting that AI errors remain easily avoidable through basic checks. Before 2023 there were no major reported incidents; now Yale’s Matthew Dahl tracks hundreds.
Other examples include a Massachusetts Superior Court fining an attorney $2,000 in February 2024 for citing fake AI-generated cases. Enforcement power rests with judges, who impose fines, disqualifications, and suspensions, such as Colorado’s 90-day suspension of an attorney who denied using AI. Firms like Morgan & Morgan face reputational damage and heightened scrutiny as BigLaw prioritizes billable speed over accuracy. Bar associations issue guidelines, but the duty to verify rests with the counsel who sign the filings.
Implications for Justice and Accountability
Short-term consequences include fines from $1,000 to over $5,000, attorney disqualifications, and suspensions. Long-term, fake citations risk propagating into judicial orders, eroding the reliability of case law. Clients suffer case losses; courts shoulder the added burden of verifying briefs amid rising caseloads. Economically, firms invest in training and policies to avoid sanctions. Socially, public faith in justice wanes as technology, pushed by elites in the name of efficiency, betrays foundational principles of truth and accountability.
[Eugene Volokh] AI Hallucinations in Filing by a Top Law Firm https://t.co/ESl5AzHl62
— Volokh Conspiracy (@VolokhC) April 21, 2026
Politically, incidents spur judicial reports and state bar rules on AI disclosure, with over 100 cases logged in databases like Damien Charlotin’s. Experts agree that human oversight remains essential. Stanford HAI found legal models hallucinate on one in six or more queries; Thomson Reuters calls the trend a plague on filings. Optimists see AI’s potential when paired with verification; pessimists fear a loss of integrity. Courts affirm that attorneys’ duty to verify is unchanged, echoing conservative calls for individual responsibility over reliance on unproven tech.
Sources:
Thomson Reuters: GenAI Hallucinations in Legal Filings
Stanford HAI: AI Trial – Legal Models Hallucinate 1 Out of 6 or More Benchmarking Queries
Klemchuk: AI Hallucinations in Court Filings
Cronkite News: Lawyers, AI Hallucinations and ChatGPT
NCSC: Legal Practitioners Guide to AI Hallucinations
Damien Charlotin: Hallucinations Database
Stern Kessler: AI/IP Year in Review – AI Hallucinations in Court Filings and Orders