AI Developments and Cautionary Tales for the Legal Field
By David A. Gauntlett*
Introduction
Discussion of AI is pervasive in virtually every profession, and the legal field is no exception. A recent federal decision from the Northern District of California confirmed the legality of a common AI training method, signaling that AI development is not about to slow down any time soon. Unless new legislation intervenes, AI appears free to continue its rapid advancement. That said, some uses of AI tools are already risky, or outright unacceptable, under current law.
Bartz v. Anthropic PBC
In Bartz v. Anthropic PBC,[1] the court was forced to assess the legality of a common AI training method. Defendant Anthropic PBC developed its AI (“Claude”) by feeding it the text of millions of books without permission from the authors. The plaintiffs were authors of some of the books used in that process.
In early development, Anthropic simply pirated the books it needed to train Claude.[2] Anthropic conceded that this batch of pirated books included some from each of the plaintiffs.[3] It argued that “pirating initial copies of Authors’ books and millions of other books was justified because all those copies were at least reasonably necessary for training LLMs.”[4] Unsurprisingly, the court determined that this method of acquisition was unacceptable.[5]
Anthropic did not, however, train Claude solely on books obtained through piracy. It also legally purchased books in bulk, scanned the pages, and used the content for training. The court ruled that Anthropic’s use of books it legally obtained fell well within the fair use exception to copyright protection.
[T]he purpose and character of using copyrighted works to train LLMs to generate new text was quintessentially transformative. Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different. If this training process reasonably required making copies within the LLM or otherwise, those copies were engaged in a transformative use.[6]
Anthropic’s method of digitizing the books was critical to the court’s determination. Anthropic destroyed each print copy in the process of digitizing it, and it ensured that the digital copies were not distributed outside the company. Absent those factors, the court’s rationale suggests, the practice would have been infringing.[7] But so long as that practice is followed, the court suggested that such use of legitimately purchased books is essentially the same as keeping a library of purchased physical books, “but with more favorable storage and searchability properties.”[8]
Existing TCPA Provisions Limit AI Use in Automated Marketing Calls
Businesses are increasingly employing AI-powered marketing tools to expand their reach and increase engagement. Synthetic voice calls (often using AI voice-cloning technology), smart SMS campaigns, and machine-orchestrated client outreach are rapidly replacing traditional cold calls. The primary obstacle standing between this technology and consumers is the Telephone Consumer Protection Act (“TCPA”), a 1991 law designed to stop robocalls and fax spam.
The TCPA forbids “any telephone call to any residential telephone line using an artificial or prerecorded voice to deliver a message without the prior express consent of the called party.”[9] Recent class actions suggest an emerging trend: expanding the definitions of “artificial voice” and “autodialer” to reach potentially invasive uses of AI. Courts have entertained arguments that AI-generated or cloned voices can trigger TCPA liability no matter how “natural” they sound, but no definitive rulings have been made.
Given how quickly AI can churn through these outreach efforts, companies are understandably cautious about unleashing them until the legal landscape is more settled. Each violation, whether a missing opt-out, a failure to disclose voice synthesis, or a misdialed number, carries statutory damages of $500, trebled to $1,500 for willful or knowing violations. Because those damages accrue per violation, exposure scales quickly: a campaign of 100,000 noncompliant calls could expose a company to between $50 million and $150 million in statutory damages alone.
Lawyers Employing AI Are Still Responsible for All Outcomes
Attorneys are obligated to abide by various standards of care and competence. Sloppy use of AI is particularly dangerous in two situations.
First, as illustrated in Mata v. Avianca, Inc.,[10] AI cannot be relied upon as a substitute for legal research. There, the court sanctioned a lawyer for (1) submitting fabricated case citations generated by ChatGPT without confirming their existence and (2) subsequently “attempt[ing] to mitigate his actions by creating the false impression that he had done other, meaningful research on the issue and did not rely exclusively on an AI chatbot, when, in truth and in fact, it was the only source of his substantive arguments.”[11] The court determined that the attorney’s reliance on the AI system breached the duty of competence. Consequently, the court applied existing ethical rules to issue a $5,000 sanction “to advance the interests of deterrence and not as punishment or compensation.”[12]
Second, lawyers may violate the confidentiality requirement imposed by the ABA’s Rule 1.6 (and state bar equivalents) by uploading client details to AI tools. Data retention policies vary from service to service. Depending on the policy, data shared with an AI tool may be retained, shared with others, or used to train future models. The risk is greatly mitigated by using AI tools specifically designed for the legal profession (e.g., CoCounsel, Harvey, or Lexis+ AI) that preserve confidential data only locally.
On a practical note, this is especially dangerous for lawyers who do not have Errors & Omissions (“E&O”) coverage specifically tailored to AI use. Under a standard professional liability policy, which covers any “negligent act, error or omission” occurring in the course of the insured’s “professional services,” the situations discussed above would arguably be covered. But smart money would bet on insurers doing everything in their power to stretch an exclusion to reach that conduct, or even to argue that AI use falls outside the scope of a lawyer’s “professional services.” Insurers are particularly incentivized to litigate these questions now, while AI use is still developing, because any case could establish precedent that sets the tone for dozens of future disputes.
When completing an E&O application, lawyers should also be careful with language addressing the requirement to give prior notice of any circumstance that may give rise to a future claim. Broad language, such as that used by Aspen Insurance, may include a provision requesting that every lawyer at a firm be consulted and inform the insurer of “any fact or circumstance, act, error, omission, personal injury or situation which he or she could reasonably expect to give rise to a claim.” Given the potential for mistakes from AI use, it may be safest for all attorneys employing such tools to disclose that use up front.
Conclusion
Due in large part to the Anthropic decision clearing the way for AI companies to continue their preferred method of training their models, AI is likely to become even more pervasive than it already is. While some older legislation, like the TCPA, is already reasonably well constructed to deal with the new use cases being introduced, even the best-written laws will inevitably create gray areas when confronted with new technology. It is also clear that this technology cannot be embraced without caution, particularly by those (like lawyers) who are legally and ethically obligated to maintain certain standards in their work.
*David A. Gauntlett is a principal of Gauntlett & Associates and represents policyholders in insurance coverage disputes regarding intellectual property, antitrust, and business tort claims, as well as in the underlying actions. Mr. Gauntlett can be reached at (949) 514-5662 or dag@gauntlettlaw.com. For more information, visit Gauntlett & Associates at www.gauntlettlaw.com.
[1] Bartz v. Anthropic PBC, No. C 24-05417 WHA, 2025 WL 1741691 (N.D. Cal. June 23, 2025).
[2] Id. at *2.
[3] Id.
[4] Id. at *5.
[5] Id. at *11 (“There is no decision holding or requiring that pirating a book that could have been bought at a bookstore was reasonably necessary to writing a book review, conducting research on facts in the book, or creating an LLM. Such piracy of otherwise available copies is inherently, irredeemably infringing even if the pirated copies are immediately used for the transformative use and immediately discarded.”).
[6] Id. at *8.
[7] Id. at *10 (“[E]very purchased print copy was copied in order to save storage space and to enable searchability as a digital copy. The print original was destroyed. One replaced the other. And, there is no evidence that the new, digital copy was shown, shared, or sold outside the company. This use was . . . clearly transformative . . . .”).
[8] Id.
[9] 47 U.S.C. § 227(b)(1)(B).
[10] Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023).
[11] Id. at 457.
[12] Id. at 466.