It was bad enough to hear that lawyers cited cases pulled from ChatGPT without ever reading them.

Things only get wilder in the lawyers’ defense against sanctions.

They blamed ChatGPT for tricking them into including fictitious legal research in a brief supporting a personal injury claim against Avianca Airlines.

Tricked. Really?

Not only did the cited cases not exist, but the airlines named in those cases didn’t exist either.

From Larry Neumeister and the Associated Press:

Schwartz told Judge P. Kevin Castel he was “operating under a misconception … that this website was obtaining these cases from some source I did not have access to.”

Website?

He said he “failed miserably” at doing follow-up research to ensure the citations were correct.

“I did not comprehend that ChatGPT could fabricate cases.”

The judge, weighing sanctions against the lawyers and their law firm, confronted Schwartz with one of the fabricated cases: initially described as a wrongful death claim against an airline, it turned out to be about a man who missed a flight.

“Can we agree that’s legal gibberish?” the judge asked.

Another lawyer subject to sanctions said he trusted Schwartz and didn’t read the cases.

Confronted with portions of the case that were pure gibberish, the lawyer said, “It never dawned on me that this was a bogus case.”

Things only get crazier with the “lawyers don’t understand technology” defense.

An attorney for the law firm told the judge,

“[L]awyers have historically had a hard time with technology, particularly new technology, and it’s not getting easier.”

“Mr. Schwartz, someone who barely does federal research, chose to use this new technology. He thought he was dealing with a standard search engine. What he was doing was playing with live ammo.”

Technology and ChatGPT are not the problem here. Lawyer stupidity is.

Sanctions to be determined.