Michael Cohen, Donald Trump's former lawyer and a key witness in Trump's upcoming criminal trials, has admitted to mistakenly relying on legal citations generated by Google Bard, an artificial intelligence (AI) chatbot.
The admission, which comes as Cohen prepares to testify against Trump, sheds light on the potential pitfalls of using AI in the legal profession.
Misunderstanding leads to AI reliance
In a recent court filing, Michael Cohen acknowledged that he had provided his lawyer, David Schwartz, with legal citations generated by Google Bard in support of his case. Not being a practicing attorney, Cohen reportedly mistook Google Bard for a supercharged search engine rather than a generative AI service. As a result, he unwittingly passed along inaccurate citations that ended up in official court documents.
“The invalid citations at issue—and many others that Mr. Cohen found but were not used in the motion—were produced by Google Bard, which Mr. Cohen misunderstood to be a supercharged search engine, not a generative AI service like Chat-GPT,” the court filing stated.
Cohen’s defense argued that he lacked the legal expertise to discern the accuracy of AI-generated citations and emphasized that he had no ethical obligation to verify the research. Instead, they blamed Cohen’s lawyer, David Schwartz, for failing to validate the citations before including them in the legal motion.
AI pitfalls in legal research
This case isn’t the first instance of lawyers running into trouble with AI-generated legal content. Earlier this year, Steven Schwartz, an attorney at the New York law firm Levidow, Levidow & Oberman, faced court sanctions for submitting false court citations produced by AI.
In that instance, Schwartz admitted to using AI, specifically ChatGPT, for legal research but claimed it was his first time doing so. However, the judge overseeing the case uncovered serious inaccuracies in the AI-generated content.
“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and internal citations,” the judge declared.
These incidents highlight the potential risks associated with incorporating AI into the legal research process, especially when individuals lack the necessary expertise to assess the accuracy of AI-generated information.
The call for caution in legal AI
As AI advances and becomes more integrated into industries such as law, the need for caution and due diligence grows increasingly evident. Legal professionals and their clients must recognize the limitations of AI and treat AI-generated content as a tool rather than a definitive source.
Experts in the legal field suggest that lawyers should undergo training to understand AI’s capabilities and shortcomings, enabling them to make informed decisions about its use. Additionally, a system of checks and balances within law firms should be established to review and validate AI-generated content before it is presented in court or included in legal documents.