Use AI to support your work—not to replace your professional judgment
As artificial intelligence tools such as ChatGPT, Harvey, Copilot, and Claude become more accessible, many attorneys have begun exploring how best to use AI in their practices. While such tools can streamline workflows, a growing number of recent examples show that AI is not a substitute for professional judgment and that its misuse carries real consequences, including:
- In late 2025, a Cook County, Illinois judge sanctioned an attorney and her former law firm a combined $59,500 after a court filing included a fake case generated by ChatGPT. The court found not only improper reliance on AI, but also misrepresentations about how the erroneous citation made it into the brief and a failure to promptly correct the issue. See https://chicago.suntimes.com/the-watchdogs/2025/12/09/goldberg-segalla-law-firm-cha-sanctioned-60-000-ai-chatgpt-lead-paint-court-case.
- A pro se plaintiff used AI tools to generate 21 different motions, at times filing nearly one per day, overwhelming defense counsel and driving up litigation costs. Attorneys involved in the case described polished but AI-generated filings that demanded extensive responses and often contained subtle inaccuracies. See https://www.law360.com/legalindustry/articles/2424818/his-client-got-a-pro-se-suit-then-the-ai-filings-started-.
- In early 2026, a federal judge in Kansas confronted a patent attorney who had unknowingly incorporated multiple false citations produced by ChatGPT into his briefing, ordering an explanation as to why sanctions should not be imposed. The attorney acknowledged relying on AI without checking the cited cases and expressed regret for failing to verify the information before submission. See https://www.abajournal.com/news/article/judge-orders-patent-attorneys-to-explain-ai-hallucinated-citations.
The response from the bench has been clear: the duty of candor, Rule 11 obligations, and responsibility for accuracy do not change simply because AI tools are used. A magistrate judge in the Eastern District of Oklahoma recently updated his AI guidelines to underscore that technology is not the issue; "truth" is. Magistrate Judge Jason A. Robertson reiterates the requirement to comply with Rule 11 and states that "human verification" of all cited authority is required. See https://www.oked.uscourts.gov/sites/oked/files/AI%20Guidelines%20JAR%201.06.26.pdf.
As attorneys, we must stay vigilant:
- Verify every citation and quotation manually, especially when AI assisted in locating or drafting them. Using AI does not lessen an attorney's responsibility to ensure accuracy and truthfulness.
- Remember that responsibility is non-delegable. Courts evaluate the product of your signature, not the tools behind it. Be sure to check the court's (and the specific judge's) rules on candor and AI use.
- Implement internal safeguards, such as assigning someone to double-check all writing and citations and/or requiring AI training and certification for all attorneys and staff.
Generative AI is here to stay, and its benefits are undeniable. But these tools are only as reliable as the scrutiny applied to their output. Used wisely, AI can support our craft. Used carelessly, it can threaten our credibility, our clients’ interests, and the integrity of the judicial process.