How lawyers used ChatGPT and got into trouble

Zachariah Crabill was two years out of law school, burned out and nervous, when his bosses added another case to his workload in May. He worked for hours on the motion until he had an idea: Maybe ChatGPT could help?

Within seconds, the AI chatbot had completed the document. Crabill sent it to his boss for review and filed it in a Colorado court.

“I was over the moon excited about just the headache that it saved me,” he told The Washington Post. But his relief was short-lived. While reviewing the brief, he realized to his horror that the AI chatbot had created several fake court citations.

Crabill, 29, apologized to the judge and explained that he had used an AI chatbot. The judge reported him to a statewide office that handles attorney complaints, Crabill said. In July, he was fired from his Colorado Springs law firm. Looking back, Crabill says he wouldn’t use ChatGPT again, but that it can be hard for an overwhelmed novice lawyer to resist.

“This is all so new to me,” he said. “I just had no idea what to do and no idea who to turn to.”

Business analysts and entrepreneurs have long predicted that the legal profession would be disrupted by automation. As a new generation of AI language tools sweeps the industry, that moment appears to have arrived.

Stressed-out lawyers are turning to chatbots to write tedious briefs. Law firms are using AI language tools to sift through thousands of case documents, replacing work once done by associates and paralegals. AI legal assistants help lawyers analyze documents, memos and contracts in minutes.

The market for legal artificial intelligence software could grow from $1.3 billion in 2022 to as much as $8.7 billion by 2030, according to an analysis by the market research firm Global Industry Analysts. A Goldman Sachs report in April estimated that 44 percent of legal jobs could be automated, more than in any other sector except administrative work.

But these money-saving tools can come at a price. Some AI chatbots are prone to fabricating facts, which has led to lawyers being fired, fined, or having cases thrown out. Legal professionals are racing to create guidelines for the technology’s use, to prevent inaccuracies from derailing major cases. In August, the American Bar Association launched a year-long task force to examine the effects of AI on the practice of law.

“It’s revolutionary,” said John Villasenor, a senior fellow at the Brookings Institution’s Center for Technology Innovation. “But it’s not magic.”

AI tools that quickly read and analyze documents allow law firms to offer cheaper services and lighten lawyers’ workloads, Villasenor said. But the boon can also be an ethical minefield when it results in high-profile mistakes.


Last spring, Lydia Nicholson, a Los Angeles housing attorney, received a legal brief in connection with their client’s eviction case. But something seemed off. The document cited court cases that didn’t ring a bell. Nicholson, who uses they/them pronouns, did some digging and realized many were fake.

They discussed it with colleagues, and “people suggested, ‘Oh, that seems like something AI could have done,'” Nicholson said in an interview.

Nicholson pointed out the errors in a filing against the Dennis Block law firm, a prominent California eviction firm. A judge agreed after an independent inquiry and fined the firm $999. The firm blamed a young, newly hired attorney in its office, saying she had used “online research” to write the brief and had resigned shortly after the complaint was filed. Several AI experts who analyzed the brief said it was “probably” generated by AI, according to the news site LAist.

The Dennis Block firm did not return a request for comment.

It’s not surprising that AI chatbots invent legal citations when asked to write a brief, said Suresh Venkatasubramanian, a computer scientist and director of the Center for Technology Responsibility at Brown University.

“What’s surprising is that they ever produce anything remotely accurate,” he said. “That’s not what they’re built to do.”

Rather, chatbots like ChatGPT are designed to carry on a conversation, having been trained on vast amounts of published text to compose plausible-sounding responses to almost any prompt. So when you ask ChatGPT for a legal brief, it knows that legal briefs include citations — but it hasn’t actually read the relevant case law, so it makes up names and dates that seem realistic.
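That failure mode suggests one mechanical safeguard: extract anything in a draft that looks like a citation and flag whatever cannot be matched against a verified source. Below is a minimal sketch in Python; the regex and the hard-coded KNOWN_CASES set are illustrative stand-ins, not a real legal research database:

```python
import re

# Illustrative stand-in: a real workflow would query an authoritative
# legal research database, not a hard-coded set.
KNOWN_CASES = {
    "347 U.S. 483",  # Brown v. Board of Education (1954)
    "410 U.S. 113",  # Roe v. Wade (1973)
}

# Matches U.S. Reports citations of the form "<volume> U.S. <page>".
CITATION_RE = re.compile(r"\b\d{1,3}\s+U\.S\.\s+\d{1,4}\b")

def flag_unverified(brief_text: str) -> list[str]:
    """Return citation strings that are absent from the verified set."""
    return [c for c in CITATION_RE.findall(brief_text) if c not in KNOWN_CASES]

draft = ("See Brown v. Board of Education, 347 U.S. 483 (1954); "
         "Smith v. Acme Corp., 512 U.S. 999 (1994).")
print(flag_unverified(draft))  # ['512 U.S. 999'] -- check by hand before filing
```

A check like this only surfaces candidates for review; a lawyer would still need to confirm each flagged citation against the actual reporter before filing.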

Judges are struggling with how to deal with these errors. Some have banned the use of artificial intelligence in their courtrooms. Others ask lawyers to sign pledges disclosing whether they have used AI in their work. The Florida Bar is weighing a proposal that would require lawyers to get a client’s permission before using AI on their case.

A point of contention among judges is whether honor codes that require lawyers to swear to the accuracy of their work apply to generative AI, said John G. Browning, a former Texas District Court judge.

Browning, who chairs the State Bar of Texas’s artificial intelligence task force, said his group is weighing a handful of approaches to regulating AI’s use, such as requiring lawyers to take professional education courses in the technology or setting specific rules for when AI-generated evidence may be admitted.

Lucy Thomson, a D.C.-area attorney and cybersecurity engineer who chairs the American Bar Association’s AI task force, said the goal is to educate lawyers about both the risks and potential benefits of AI. The bar association has not yet taken a formal position on whether AI should be banned in courtrooms, she added, but its members are actively discussing the issue.


“Many of them believe that it is not necessary or appropriate for judges to ban the use of AI,” Thomson said, “because it is just a tool, like other legal research tools.”

Meanwhile, artificial intelligence is increasingly being used for “e-discovery” – the search for evidence in digital communications, such as emails, chats or online workplace tools.

While previous generations of technology let people search for specific keywords and synonyms across documents, today’s AI models have the potential to make more sophisticated inferences, said Irina Matveeva, head of data science and AI at Reveal, a Chicago-based legal technology company. Generative AI tools could have allowed a lawyer in the Enron case, for example, to ask, “Were there any valuation concerns at Enron?” and get an answer based on the model’s analysis of the documents.
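What that kind of question-answering adds over keyword matching can be illustrated with embedding-based retrieval: the documents and the question are mapped to vectors, and documents are ranked by semantic similarity rather than by shared search terms. A minimal sketch, assuming the open-source sentence-transformers library and invented documents; this is not Reveal’s implementation:

```python
# Rank documents by semantic similarity to a natural-language question.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

documents = [
    "Q3 memo: mark-to-market estimates on the broadband unit look aggressive.",
    "Travel itinerary for the Houston off-site, including hotel bookings.",
    "Email thread questioning whether asset valuations can survive an audit.",
]
query = "Were there any valuation concerns?"

doc_vecs = model.encode(documents, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

# Cosine similarity needs no keyword overlap: the memo about
# "mark-to-market estimates" can still surface for "valuation concerns".
scores = util.cos_sim(query_vec, doc_vecs)[0]
for score, doc in sorted(zip(scores.tolist(), documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```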

Wendell Jisa, Reveal’s CEO, added that he believes that in the coming years, AI tools will “bring true automation to the practice of law — eliminating the need for the human interaction of everyday lawyers clicking through emails.”

Jason Rooks, chief information officer for a Missouri school district, said he began to be overwhelmed during the coronavirus pandemic by requests for electronic records from parents fighting custody battles or organizations suing schools over their Covid-19 policies. At one point, he estimates, he was spending close to 40 hours a week just sifting through emails.

Instead, he turned to an e-discovery tool called Logikcull, which says it uses AI to help sift through documents and predict which ones are most likely to be relevant to a given case. Rooks could then manually review that smaller subset, cutting the time he spent on each case by more than half. (Reveal acquired Logikcull in August, creating a legal technology company valued at more than $1 billion.)
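The core idea behind such predictive-review features, often called predictive coding or technology-assisted review, can be sketched in a few lines: train a classifier on a small hand-labeled seed set, then rank the unreviewed pile by predicted relevance so a human reads the likely hits first. This sketch uses scikit-learn with invented emails and labels; it illustrates the general technique, not how Logikcull actually works:

```python
# Predictive review: label a few documents, rank the rest by relevance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled = [
    ("Re: custody schedule for spring break", 1),       # relevant
    ("District Covid-19 masking policy update", 1),     # relevant
    ("Cafeteria menu for the week of March 7", 0),      # not relevant
    ("Bus route change on the north side", 0),          # not relevant
]
unlabeled = [
    "Board discussion of quarantine rules for athletics",
    "Reminder: yearbook photos due Friday",
]

texts, labels = zip(*labeled)
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

clf = LogisticRegression().fit(X, labels)

# Score the unreviewed documents; higher probability means review sooner.
probs = clf.predict_proba(vectorizer.transform(unlabeled))[:, 1]
for p, doc in sorted(zip(probs, unlabeled), reverse=True):
    print(f"{p:.2f}  {doc}")
```

In practice these systems iterate: a reviewer confirms or corrects the top-ranked documents, and the model is retrained on the growing labeled set.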

But even using AI for legal grunt work like e-discovery comes with risks, said Venkatasubramanian, the Brown professor: “If they’ve been subpoenaed and they produce some documents and not others because of a ChatGPT bug — I’m not a lawyer, but that could be a problem.”

Those warnings won’t stop people like Crabill, whose mishap with ChatGPT was first reported by Colorado radio station KRDO. After he filed the flawed motion, the case was thrown out for unrelated reasons.

He says he still believes artificial intelligence is the future of law. Now he has his own firm and says he’ll likely use AI tools designed specifically for lawyers to help with his writing and research, rather than ChatGPT. He said he doesn’t want to be left behind.

“It’s no use being a naysayer,” Crabill said, “or being against something that will invariably be the way of the future.”