Ex-Trump fixer Michael Cohen didn't realize he used AI tool to generate bogus legal decisions: lawyer – New York Daily News
Former Donald Trump attorney Michael Cohen used Google’s Bard AI to generate bogus legal filings in a bid to end his court-mandated supervised release, his lawyer admitted in a federal court filing.
Cohen is currently on supervised release after serving half of a three-year sentence following his 2018 guilty plea for arranging hush money payments to porn star Stormy Daniels during Trump’s 2016 presidential run.
Cohen’s attorney Danya Perry wrote that her client thought Google Bard was a souped-up search engine, not a ChatGPT-like artificial intelligence tool, when he used the service to generate three phony decisions to support his argument that the court should terminate his post-release supervision.
“This is a simple story of a client making a well-intentioned but poorly-informed suggestion,” Perry wrote in Thursday’s letter to Judge Jesse M. Furman.
Cohen’s lawyers filed the motion asking Furman to relieve their client of the court’s supervision in November, but the eagle-eyed judge found notable inconsistencies in the three decisions cited in support of their argument.
One was an excerpt taken from a Fourth Circuit decision that had nothing to do with supervised release, while another quoted from a decision by the Board of Veterans’ Appeals, an administrative tribunal that does not rule on criminal matters, the judge wrote.
The third case, according to Furman’s order, “appears to correspond to nothing at all.”
Cohen, who was disbarred following his 2018 guilty plea and is no longer a practicing lawyer, can’t be held responsible for his lack of judgment in using AI to do his legal research for him, Perry told Judge Furman in Thursday’s letter.
“Mr. Cohen is not a practicing attorney and has no concept of the risks of using AI services for legal research — nor does he have an ethical obligation to verify the accuracy of his research,” wrote Perry.
She instead blamed the legal faux pas on David Schwartz, the attorney for Cohen who submitted the bogus legal decisions in November, saying it was his job to vet his client’s legal work.
“Mr. Schwartz… did have an obligation to verify the legal representations being made in a motion he filed,” Perry wrote the judge. “Unfortunately, Mr. Schwartz did not fulfill that obligation.”
Cohen’s shoddy use of AI is reminiscent of the legal chicanery detected by federal Judge Kevin Castel in June, when he fined two New York lawyers $5,000 for using ChatGPT to generate false cases supporting their client’s arguments against Colombian airline Avianca.
Cohen declined to comment, referring all questions to Perry.
Perry told the Daily News that her client has nothing to hide, as evidenced by his willingness to unseal the AI-generated filings.
“These filings — and the fact that he was willing to unseal them — show that Mr. Cohen did absolutely nothing wrong,” said Perry.
Copyright © 2024 New York Daily News