- An attorney is facing backlash after submitting court documents containing factual errors generated by ChatGPT.
- Analysts remain skeptical about the potential of AI to replace human workers.
An attorney from New York, Steven Schwartz of Levidow, Levidow & Oberman law firm, has come under fire for incorporating ChatGPT, an AI language model, into his legal research while handling a lawsuit against Avianca Airlines. The case involves Robert Mata, who claims to have suffered an injury from a serving cart during a flight with the Colombian airline in 2019, as reported by CNN Business on May 28.
The attorney presented erroneous documentation generated by ChatGPT
The case took an unexpected turn when the judge overseeing the proceedings noticed inconsistencies and factual errors in the documentation provided. In an affidavit dated May 24, Schwartz admitted to using ChatGPT for his legal research and claimed he was unaware of the potential for false information within its content. He further stated that this was his first time utilizing the AI model for legal research.
The judge’s scrutiny revealed that several cases submitted by the attorney appeared to be fictitious, featuring fabricated quotes and inaccurate internal citations. Additionally, certain referenced cases were found to be non-existent, and there was an instance where a docket number was mistakenly associated with another court filing.
The attorney expressed regret for relying on the AI chatbot without conducting due diligence. In his affidavit, he stated that he greatly regrets using generative artificial intelligence in his research and vowed never to do so again without absolute verification of authenticity.
Analysts discuss the potential of AI to replace humans
The integration of ChatGPT into professional workflows has sparked an ongoing debate regarding its suitability and reliability. While the intelligence of AI models like ChatGPT continues to advance, doubts remain about their ability to entirely replace human workers. Syed Ghazanfer, a blockchain developer, acknowledges the value of ChatGPT but questions its communication skills as a complete substitute for human interaction. He highlights that programming languages were created to address requirements that may not be effectively conveyed in native English.
As AI technology progresses, developers and professionals alike navigate the delicate balance between leveraging its capabilities and ensuring the accuracy and validity of the information it generates. The case involving Steven Schwartz and his use of ChatGPT serves as a cautionary tale, emphasizing the importance of human oversight and critical evaluation in legal research and other professional contexts.
While AI models like ChatGPT can assist in streamlining certain tasks, they should be used in conjunction with human expertise and thorough verification processes to minimize the risk of inaccuracies and false information. As the boundaries of AI capabilities expand, it remains essential for professionals to exercise prudence and discernment when incorporating such tools into their work.