In the latest edition of The Wright Toolbox:
Be Wary of Artificial Intelligence
First, a disclaimer: I am no expert on the subject of artificial intelligence; I am only reporting the facts. As there seems to be a general euphoria surrounding artificial intelligence tools such as ChatGPT, I write, like a canary in the coal mine, to sound a word of caution. On June 22, 2023, a federal district court judge in a case pending in the United States District Court for the Southern District of New York, styled Mata v. Avianca, Inc., No. 22-CV-1461 (PKC), 2023 WL 4114965, at *1–17 (S.D.N.Y. June 22, 2023), issued an opinion sanctioning two lawyers and a law firm. But this article is not about the lawyers or the sanction; it is about an aspect of artificial intelligence that we all need to be aware of as we get inundated by AI – the AI can lie! That’s right; as the Mata case makes abundantly clear, the AI flat out fabricated things that did not exist. We have a hard enough time in this world of “fake news” and partisan spin on everything without artificial intelligence making stuff up too. Let’s look at the case.
Roberto Mata commenced the case by filing a complaint asserting that he was injured when a metal serving cart struck his left knee during a flight from El Salvador to JFK Airport. Defendant, Avianca, filed a motion to dismiss arguing that Mata’s claims were time-barred. Mr. Mata’s lawyers filed an opposition to the motion to dismiss citing and quoting from purported judicial decisions. It turns out that the lawyers used ChatGPT to perform the research on the issues in the case for their opposition. However, the lawyers did not review the cited judicial authorities before filing the opposition. Avianca filed a reply memorandum stating that “although Plaintiff ostensibly cites to a variety of cases in opposition to this motion, the undersigned has been unable to locate most of the case law cited in Plaintiff’s . . . Opposition, and the few cases which the undersigned has been able to locate do not stand for the propositions for which they are cited.” Indeed, Avianca identified seven purported “decisions” that could not be located.
The Court conducted its own search for the cited cases, but was unable to locate multiple authorities cited in the opposition. During the investigation that followed, it was revealed that Mata’s lawyers had used ChatGPT and that the AI had actually fabricated the cited cases. The lead lawyer testified at the sanctions hearing that he had operated under the mistaken belief “that ChatGPT could not possibly be fabricating cases on its own.”
For a large part of the opinion, the court discussed its review of the alleged case law cited by the AI. One of the fake cases was presented as a decision issued by a panel of judges of the United States Court of Appeals for the Eleventh Circuit, bearing the docket number 18-13694, and purported to discuss the legal issues involved in the Mata case. The Clerk of the United States Court of Appeals for the Eleventh Circuit confirmed that the decision is not an authentic ruling of the Court and that no party by that name has been a party to a proceeding in the Court. One of the judges identified in the fake case was not a judge of that court. The docket number belonged to an entirely different case, and the Federal Reporter citation was for a different case in a different court. The Mata judge noted that the fake decision showed stylistic and reasoning flaws that do not generally appear in decisions issued by United States Courts of Appeals. Its “legal analysis was gibberish.” The fake case set forth a complicated set of fake facts, and the judge noted that its summary of the case’s procedural history “was difficult to follow and bordered on nonsensical.” The fake opinion included internal citations to, and quotes from, decisions that were themselves non-existent and fabricated. The judge detailed other “decisions” cited in the fake case that had correct names and citations, but did not contain the language quoted or support the propositions for which they were offered. The same issues were repeated in the six other fake cases provided by ChatGPT.
The ChatGPT search history was discussed. The lawyer’s first search was, “argue that the statute of limitations is tolled by bankruptcy of defendant pursuant to Montreal Convention.” ChatGPT responded with broad descriptions of the Montreal Convention, statutes of limitations and the federal bankruptcy stay, advised that “[t]he answer to this question depends on the laws of the country in which the lawsuit is filed,” and then stated that the statute of limitations under the Montreal Convention is tolled by a bankruptcy filing. ChatGPT did not cite case law to support these statements. The lawyer then entered various searches that caused ChatGPT to generate descriptions of fake cases, including “provide case law in support that statute of limitations is tolled by bankruptcy of defendant under Montreal Convention,” “show me specific holdings in federal cases where the statute of limitations was tolled due to bankruptcy of the airline,” “show me more cases” and “give me some cases where the Montreal Convention allowed tolling of the statute of limitations due to bankruptcy.” In response to these more specific searches, the chatbot complied by simply making the cases up.
When the authenticity of the research was questioned, the lawyer asked ChatGPT about the reliability of its work, asking whether the cases were real or whether any were fake. ChatGPT responded that it had supplied “real” authorities that could be found through Westlaw, LexisNexis and the Federal Reporter.
This is scary stuff! This was not a mistake. The AI created entirely fake court opinions, with fake case numbers, fake judges and fake supporting citations, and then, when questioned about it, ChatGPT lied, stating that the cases were real. This is horrifying. Consider yourself warned.