Jerome Dewald sat with his legs crossed and his hands folded in his lap before an appellate panel of New York State judges, ready to argue for a reversal of a lower court's decision in his dispute with a former employer.
The court had allowed Mr. Dewald, who is not a lawyer and was representing himself, to accompany his argument with a prerecorded video presentation.
As the video began to play, it showed a man who appeared younger than Mr. Dewald's 74 years, wearing a blue collared shirt and a beige sweater, standing in front of what looked like a blurry virtual background.
A few seconds into the video, one of the judges, confused by the image on the screen, asked Mr. Dewald whether the man was his lawyer.
“I generated that,” Mr. Dewald replied. “That's not a real person.”
The judge, Justice Sallie Manzanet-Daniels of the Appellate Division's First Judicial Department, paused for a moment. It was clear she was displeased with his answer.
“It would have been nice to know that when you made your application,” she snapped at him.
“I don't appreciate being misled,” she added, before calling out for someone to turn off the video.
What Mr. Dewald had not disclosed was that he had created the digital avatar with artificial intelligence software, the latest example of AI creeping into the American legal system in potentially troubling ways.
The hearing at which Mr. Dewald gave his presentation, on March 26, was recorded by court system cameras and was previously reported by The Associated Press.
On Friday, Mr. Dewald, the petitioner in the case, said he had been overwhelmed with embarrassment during the hearing. He said he had sent the judges a letter of apology shortly afterward, expressing his deep regret and acknowledging that his actions had “unintentionally misled” the court.
He said he had turned to the software after stumbling over his words in earlier legal proceedings. Using AI for the presentation, he thought, might ease the pressure he felt in the courtroom.
He said he had planned to make a digital version of himself, but ran into “technical difficulties,” which led him to create a fake person for the recording instead.
“My intention was never to deceive, but rather to present my arguments in the most efficient manner possible,” he said in his letter to the judges. “However, I recognize that proper disclosure and transparency must always take precedence.”
Mr. Dewald, a self-described entrepreneur, was appealing an earlier ruling in a contract dispute with a former employer. He eventually presented an oral argument at the hearing, stammering and taking frequent pauses to regroup and to read prepared remarks from his cellphone.
However embarrassed he may have been, Mr. Dewald could take some comfort in the fact that actual lawyers have gotten into trouble for using AI in court.
In 2023, a New York lawyer faced severe consequences after he used ChatGPT to create a legal brief riddled with fake judicial opinions and legal citations. The case highlighted the flaws of relying on artificial intelligence and reverberated throughout the legal profession.
That same year, Michael Cohen, a former lawyer and fixer for President Trump, provided his lawyer with fake legal citations he had gotten from Google Bard, an artificial intelligence program. Mr. Cohen ultimately pleaded for mercy from the federal judge presiding over his case, emphasizing that he had not known the generative text service could produce false information.
Some experts say that artificial intelligence and large language models can be helpful to people who have legal matters to deal with but cannot afford lawyers. Still, the technology's risks remain.
“They can still hallucinate – producing very compelling-looking information” that is actually “fake or nonsensical,” said Daniel Shin, the assistant director of research at the Center for Legal and Court Technology at William & Mary Law School. “That risk has to be addressed.”