For two hours on Thursday, attorney Steven Schwartz had to justify his use of ChatGPT before a judge. The AI had invented the fake cases he cited in a court filing.
No, ChatGPT is not a “super search engine”. For two hours this Thursday, June 8, Steven Schwartz answered to a judge of the Southern District of New York, The New York Times reports. The lawyer had used OpenAI’s conversational bot as part of his legal research.
Steven Schwartz turned to ChatGPT to defend a client who had been struck by a serving cart during a flight. The goal was to find out whether similar cases had been decided in the past. Except the software cited cases that never existed, and the lawyer is accused of failing to verify the information ChatGPT provided.
“My God, I regret not having done that,” he said, adding that he felt embarrassed and ashamed.
Implications for the entire legal profession
According to him, he “did not comprehend that ChatGPT could fabricate cases”. The lawyer added: “I heard about this new site and I mistakenly assumed it was a super search engine.”
Nearly 70 people attended Steven Schwartz’s hearing. In the gallery, lawyers, law students, law clerks and professors watched the scene unfold. “This case has ramifications for the entire legal profession,” legal commentator David Lat told the New York Times.
Because the lawyer could face sanctions. At least, that is what Judge P. Kevin Castel is weighing. The consequences could affect not only Steven Schwartz but also his colleague Peter LoDuca, who took over the case.
Although he did not conduct the research himself, Peter LoDuca nonetheless signed off on the (fake) cases presented by his colleague. He admitted that he never confirmed whether these cases existed.