'Like looking at a car wreck': NY attorney appears before judge for citing AI-generated 'cases' in legal brief

In May, attorney Steven A. Schwartz (of the New York City firm Levidow, Levidow & Oberman) submitted a 10-page legal brief on behalf of a client who is suing an airline. The brief, according to the New York Times, cited more than half a dozen "decisions" that were "relevant to" the case.

But none of the "cases" cited in the brief were real. All of them were artificially generated by the chatbot ChatGPT. And on Thursday, June 8, Schwartz is scheduled to appear before a Manhattan judge and face "possible sanctions."

In a declaration filed on Tuesday, June 6, Schwartz explained, "I simply had no idea that ChatGPT was capable of fabricating entire case citations or judicial opinions, especially in a manner that appeared authentic…. This has been deeply embarrassing on both a personal and professional level, as these articles will be available for years to come."

Schwartz's client in the case is Roberto Mata, who sued Avianca Airlines and alleged that he was injured when a metal serving cart hit his knee during a flight from El Salvador to New York City. Avianca's lawyers asked a judge to throw the case out, and when Mata's attorneys objected, they presented the problematic ten-page brief to show why they believed that Roberto Mata v. Avianca Inc. should proceed.

But the "cases" cited in the brief — such as Martinez v. Delta Air Lines and Zicherman v. Korean Air Lines — were not actual cases. They had been artificially generated by ChatGPT.

Schwartz's lawyers have asked Judge P. Kevin Castel not to impose sanctions on either Schwartz or Levidow, Levidow & Oberman.

The attorneys argued, "Sanctions would serve no useful purpose. Mr. Schwartz and the firm have already become the poster children for the perils of dabbling with new technology; their lesson has been learned."

Legal expert David Lat told the New York Times, "This case has reverberated throughout the entire legal profession. It is a little bit like looking at a car wreck. But it is also a valuable object lesson about the perils of relying upon (artificial intelligence) tools in legal practice."

The New York Times' full reports are available on its website (subscription required).
