You’ve probably heard about the case of ChatGPT providing a New York lawyer with a number of cases that the lawyer then cited in a brief, cases that no one, including the judge, could find.
That was because ChatGPT had invented them all.
Sad as it is to say, lawyers doing dumb things is routine.
As Carolyn Elefant shared on LinkedIn,
“Folks, this stuff happens all the time in law practice. A quick search of a phrase like ‘failed to Shepardize’ (in layman terms, ensure that cases haven’t been overruled) turns up at least a dozen court cases, including one for Rule 11 sanctions.
The bottom line is this. Lawyers cut corners. Some do it because they’re lazy and some because they want to pull a fast one and some because they can’t afford the research tools that would help them avoid the problem.
The ChatGPT story isn’t an indictment of AI tools. It’s an indictment of bad lawyering.
But sadly, after this incident, we’ll see courts and ethics regulators banning lawyers from using ChatGPT or discouraging use of other AI tools because of a single incompetent lawyer.”
Elefant’s spot on. I am already seeing legal professionals cite this case of lawyer stupidity as a reason why AI should not be used in the legal profession. My own lawyer mentioned the case in a phone call this morning.
This at a time when lawyers are using AI to perform high-quality work in a fraction of the time, ultimately meaning lower costs for clients.
If a lawyer wants to use ChatGPT to search for case law, codes, regulations, or possible arguments to use in a brief, have at it as a way to save time and garner ideas.
Just be smart enough to check that the info you are getting is correct.
Lawyers may also be able to use legal research products that have deployed AI. The database would be limited to the law and thus more reliable, though I am not sure how far those tools have come. Even then, any sane lawyer would read the cases found.
Dumb lawyers are no reason AI should be prohibited in the practice of law.