
Chatbot hallucinations this month have attorneys scrambling
Recent incidents involving U.S. lawyers who relied improperly on ChatGPT underscore the critical risks generative AI poses in legal practice. In May 2025 alone, three cases drew sharp judicial criticism after attorneys submitted briefs containing false citations and legal inaccuracies produced by unverified AI output.
In Alabama, the law firm Butler Snow faced potential sanctions after submitting filings riddled with nonexistent citations in its defense of the state's prison system. The firm acknowledged its misuse of AI and promised enhanced training on responsible AI practices. Similarly, attorneys in Tampa drew judicial rebuke for submitting an AI-assisted motion containing multiple legal inaccuracies; Judge Kathryn Mizelle struck the brief from the record, emphasizing that human oversight remains indispensable.
In a third case, the firms Ellis George and K&L Gates were sanctioned and fined $31,000 after repeatedly filing AI-generated briefs containing false authorities. Judge Wilner condemned their conduct as reckless, citing their failure to disclose and verify the AI-generated content.
These incidents collectively highlight a growing challenge: generative AI's seductive convenience must be weighed against the need for rigorous verification. Lawyers integrating AI into their workflows must prioritize transparency and diligent checks. Accountability and accuracy remain non-negotiable.

Author: Emma Ray - emma.ray@themastersconference.com