Lawyers are having a really bad day in court after citing fake cases fabricated by ChatGPT

A federal judge tossed a lawsuit and fined the plaintiff’s lawyers $5,000 after they used ChatGPT to research court filings that cited six bogus cases invented by OpenAI’s artificial intelligence tool.

Lawyers Steven Schwartz and Peter LoDuca of the firm Levidow, Levidow & Oberman “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question,” U.S. District Judge Kevin Castel wrote in an order issued yesterday. The attorneys, Castel wrote, “advocated for the fake cases and legal arguments” even “after being informed by their adversary’s submission that their citations were non-existent and could not be found.”

The judge imposed a $5,000 fine to be paid by the two attorneys and their firm under joint and several liability. Perhaps more embarrassing for the lawyers, they must send letters to six real judges who were “falsely identified as the author” of the bogus opinions cited in their legal filings. Castel described the legal analysis in one of the bogus cases as “gibberish.”

“The Court will require respondents to notify their client and the judges whose names were wrongly invoked of the sanctions imposed,” Castel wrote. “The Court will not require an apology from respondents, because a compelled apology is not a sincere apology. Any decision to apologize is left to respondents.”

Filing fake opinions in court harms the lawyers’ client, wastes the court’s time, forces the opposing party to “waste time and money exposing the fraud,” and causes “potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictitious conduct,” Castel wrote. “It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”

Case dismissed

As we wrote last month, Schwartz admitted to using ChatGPT for research and failed to verify that the AI chatbot’s “legal opinions” were accurate. Schwartz wrote in an affidavit that he had “never used ChatGPT as a source for legal research prior to this occurrence and was therefore unaware of the possibility that its content could be false.”

The real case, Roberto Mata vs. Avianca, was originally filed in a New York state court but was moved to U.S. District Court for the Southern District of New York. Schwartz represented Mata in the state court but is not admitted to practice in federal court. Schwartz continued writing the legal briefs, and LoDuca filed them under his own name.

Mata sought damages for injuries suffered on an Avianca flight from El Salvador to New York in August 2019, when a metal snack-and-drink cart struck his knee. Mata’s lawyers used the bogus ChatGPT citations to argue that the case should be moved back to the New York state court, where a three-year statute of limitations would apply.

Unsurprisingly, their bogus-case-backed argument did not prevail in court. In addition to sanctioning the lawyers, Castel granted Avianca’s motion to dismiss the case yesterday. The judge agreed with the defendant that a two-year statute of limitations under the Montreal Convention applies and that the plaintiff’s complaint was filed too late.

“I just never thought it could be made up”

The dispute over the fake precedents played out over a few months. Mata’s lawyers first cited the bogus cases in a March 1 filing opposing Avianca’s motion to dismiss the case.

“But if the matter had ended with respondents coming clean about their actions shortly after they received the defendant’s March 15 brief questioning the existence of the cases, or after they reviewed the Court’s orders of April 11 and 12 requiring production of the cases, the record now would look quite different,” Castel wrote. “Instead, the individual respondents doubled down and did not begin to dribble out the truth until May 25, after the Court issued an order to show cause why one of the individual respondents ought not be sanctioned.”

Castel found that the lawyers acted in “bad faith” and made “acts of conscious avoidance and false and misleading statements to the court.” While Schwartz wrote the legal briefs containing the fake cases, LoDuca failed to review them for accuracy.

“Mr. LoDuca simply relied upon a belief that work performed by Mr. Schwartz, a colleague of more than twenty-five years, would be reliable,” Castel wrote. But Schwartz’s practice was exclusively in state court. The lawyers admitted in a memorandum of law that Schwartz was trying to research “a federal bankruptcy issue with which he was completely unfamiliar.”

At a June 8 hearing on potential sanctions, Schwartz testified that he “was operating under the false perception that this website [ChatGPT] could not possibly be fabricating cases on its own.” Schwartz said, “I did not think the case could be made up, so I did not look at it from that point of view… My reaction was, ChatGPT is finding that case somewhere. Maybe it hasn’t been published. Maybe it was appealed. Maybe access to it is difficult to obtain. I just never thought it could be made up.”

The Levidow firm had no Westlaw or LexisNexis accounts, instead using a Fastcase account that provided limited access to federal cases. Schwartz testified that he “heard about this new site, which I falsely assumed was, like, a super search engine, called ChatGPT, and that is what I used.”