Lawyer used ChatGPT to litigate, but was deceived into citing non-existent cases
News on May 28: according to The New York Times, a lawyer recently relied on the chatbot ChatGPT to conduct legal research for a court case, resulting in the submission of false information to the court. The incident highlights the potential risks of using artificial intelligence in the legal field, including the spread of misinformation.
In the case, in which a man sued an airline over a personal injury, the plaintiff's legal team filed a brief citing several previous court cases to support their arguments and establish legal precedent for their claims. However, the airline's lawyers discovered that some of the cited cases did not exist and immediately notified the presiding judge.
Judge Kevin Castel expressed surprise at the situation, calling it "unprecedented," and ordered the plaintiff's legal team to provide an explanation.
Steven Schwartz, one of the plaintiff's attorneys, admitted that he had used ChatGPT to search for similar legal precedents. In a written statement, Schwartz expressed deep regret, saying he had never before used artificial intelligence to conduct legal research and did not know that its content could be false.
The documents submitted to the court included a screenshot of a conversation between Schwartz and ChatGPT, in which Schwartz asked whether one particular case, Varghese v. China Southern Airlines Co Ltd, was real. ChatGPT replied that it was, and said the case could be found in legal reference databases such as LexisNexis and Westlaw. However, subsequent investigation revealed that the case did not exist, and that ChatGPT had in fact fabricated six non-existent cases.
In light of this incident, the two attorneys involved in the case, Peter LoDuca and Steven Schwartz of Levidow, Levidow & Oberman, will attend a disciplinary hearing on June 8 to explain their conduct. IT House notes that the incident has triggered discussion in the legal community about the appropriate use of artificial intelligence tools in legal research, as well as the need for comprehensive guidelines to prevent similar situations from recurring.