By Misty Griffith

The evolution of language-based, generative artificial intelligence (AI) is as rapid as the generation of text on the screen by ChatGPT, possibly the most well-known AI of this kind. My Bar News colleagues, Tom Jarvis, Donna Parker, and I watched in awe as ChatGPT created a succinct and intelligible response to Tom’s prompt, “Can you write a short article about using ChatGPT to write legal briefs?” Words flew across the screen, producing a one-page response in less than a minute. The result is repetitious, over-simplified, and lacks eloquence; yet it is simultaneously amazing. (Read ChatGPT’s unedited answer in the sidebar below and judge for yourself.)

As I commenced writing this article, it was a cautionary tale about the shortcomings of ChatGPT 3.5 (GPT-3.5). Subsequent developments, most notably the launch of the new and significantly improved ChatGPT 4.0, rendered my research out of date. New developments have occurred weekly, and sometimes daily, as I have worked on this article. Now, the cautionary tale is that noteworthy developments may have occurred subsequent to the submission of this article.

OpenAI unveiled the GPT-3.5 platform for public use on November 30, 2022. However, usage really took off on February 7, 2023, when Microsoft integrated ChatGPT into its Bing search engine. GPT-3.5 showed potential as a time-saver but produced far-from-reliable results for legal research. GPT-3.5 failed the bar exam but, more problematically, would sometimes reference nonexistent laws or cases.

On March 14, 2023, OpenAI debuted ChatGPT 4.0 (GPT-4), which is exponentially more advanced than its predecessor.

Notably, GPT-4 passed the Uniform Bar Exam (UBE) with flying colors. The chatbot’s score of 297 was in the 90th percentile, ranking it among the top 10 percent of exam takers. While it is not error-free, neither are humans: GPT-4 got 75.7 percent of the multiple-choice questions correct, compared to the human test takers’ average of 68 percent.

While passing the bar exam is an impressive feat, bar passage is a minimum requirement for practicing law. GPT-4 and other similar generative AI platforms do not possess the creativity, strategic thinking, empathy, and passion of a good attorney. Likewise, while GPT-4 can generate text, its writing lacks sophistication and nuance.

The enhanced capabilities of GPT-4 have sparked concerns that AI technology may eventually replace human lawyers. A March 16, 2023, survey of 4,180 respondents (including 1,176 attorneys, 1,239 law students, and 1,765 consumers) conducted by LexisNexis Legal and Professional found that 39 percent of attorneys, 46 percent of law students, and 45 percent of consumers believe that generative AI tools will significantly transform the practice of law.

Attorney John Weaver, chair of the Artificial Intelligence Practice at McLane Middleton and a member of the Cybersecurity and Privacy Group, says, “I think one of the reasons ChatGPT is getting so much attention right now is both because it seems pretty revolutionary, and is in many ways, but also because it’s coming for white collar workers. The fact that there’s now the possibility of software that can generate a lot of their work product, a lot of the things that they work on, is unnerving to a lot of people that consider these things, write about them, or might have thought that their jobs were safe.” Weaver, a prominent voice in the field of artificial intelligence law, is on the Board of Editors for RAIL: The Journal of Robotics, Artificial Intelligence & Law.

Also in March, Casetext, a legal technology company, introduced CoCounsel, the first AI legal assistant. Powered by GPT-4, CoCounsel is a tool designed to automate and streamline legal research and the drafting of legal documents. Utilizing AI for research and first drafts can save a significant amount of time. While lawyers might use AI technology as a starting point for research or drafts, it is imperative that they verify any information generated.

The ChatGPT website includes the following warnings about its limitations: “May occasionally generate incorrect information. May occasionally produce harmful instructions or biased content. Limited knowledge of the world and events after 2021.”

A major caveat for anyone doing legal research is that AI may quote a dissent in a case without indicating that the quote is from a dissenting opinion. Additionally, it may cite cases which have been overturned. AI-generated research may be a starting point, but it is not a reliable ending point.

The use of ChatGPT, CoCounsel, Google Bard, or other generative AI creates ethical concerns which will need to be addressed. Attribution of work is an obvious concern, and the legal community will need to grapple with the implications for attorney-client privilege of disclosing confidential client information to an AI chatbot. Though AI is not sentient, there is human oversight at some level, which raises privacy concerns about the information shared. Rule of Professional Conduct 1.6, regarding confidentiality of information, should come into play and would seem to necessitate a client’s informed consent if an attorney plans to utilize generative AI on a specific client matter.

At the 2023 ABA Midyear Meeting in February, the House of Delegates adopted a resolution addressing attorney accountability and transparency regarding AI. Resolution 604 sets forth these guidelines:

  • Developers of AI should ensure their products, services, systems, and capabilities are subject to human authority, oversight, and control.
  • Organizations should be accountable for consequences related to their use of AI, including any legally cognizable injury or harm caused by their actions, unless they have taken reasonable steps to prevent harm or injury.
  • Developers should ensure the transparency and traceability of their AI, and protect related intellectual property, by documenting key decisions made regarding the design and risk of data sets, procedures, and outcomes underlying their AI.

While there are legitimate concerns about the appropriate use of AI in the legal field, AI could, if used responsibly, increase access to justice. In his timely article, “The Implications of ChatGPT for Legal Services and Society,” Andrew Perlman, Dean and Professor of Law at Suffolk University Law School, opines, “Less complex legal matters may see an even more dramatic shift, with AI tools helping to address the public’s enormous unmet civil legal needs. Technology offers a promising way to address those needs, both through self-help resources and by enabling lawyers to reach far more clients than is currently possible.”1 Dean Perlman is a leading proponent of teaching law students to engage with and utilize AI tools responsibly.

The genie is out of the bottle. It is now the responsibility of legal professionals to carefully consider how to harness the power and potential of AI ethically, and in ways that may enhance the future of the legal profession.

Endnote

  1. Andrew Perlman, “The Implications of ChatGPT for Legal Services and Society,” Dec. 5, 2022, published online at Social Science Research Network (ssrn.com) and Harvard Law School Center on the Legal Profession (clp.law.harvard.edu).