Ethics Corner Article
Dear Ethics Committee:
Can I use generative artificial intelligence (AI) in my practice to assist with drafting documents? In particular, can I use generative AI programs, like ChatGPT, to prepare correspondence, pleadings, memoranda, e-mails, etc.? Are there other ethical considerations that apply to the current state of generative AI when it comes to drafting documents? Will there be a day when I have to incorporate generative AI into my practice?
To some extent, the answer to each of these questions is yes (with a few caveats).
The truth is, lawyers have been using artificial intelligence (AI) to help us draft documents, e-mails, and correspondence for some time. As the author types this, Microsoft Word is already finishing partially typed words and suggesting the next word in the sentence; that is essentially how ChatGPT functions, only on a much larger scale (and with much more information at its disposal to learn from). However, there are some key differences between the predictive text we’re familiar with in programs like Word and Outlook and the outside generative AI programs, like ChatGPT, that have emerged in recent years.
It seems inevitable that AI will become an increasingly integral part of our law practices. As more advanced AI becomes available, such as ChatGPT and other types of “generative AI” – i.e., AI that is capable of continuously learning and of generating text, images, and other data – many lawyers are wondering: Can I use generative AI in my practice? Will the day come when I have to use generative AI?
Lawyers spend a lot of time drafting documents: correspondence, pleadings and memoranda, contracts, deeds, estate plans, employment handbooks, and more. This article addresses some of the ethical considerations that must accompany a lawyer’s decision to use generative AI to assist with these everyday tasks, beyond the simple predictive text features mentioned above. At present, enterprise-level generative AI is still in its infancy; within the next few years, however, software will likely be developed that allows law firms to keep generative AI “in-house” – ensuring protection of client information while allowing lawyers to benefit from the continuous learning models that generative AI utilizes. For now, we are limited to “outside AI” – programs like ChatGPT, DALL-E, Google Bard, Perplexity, ChatSonic, and many others still in development. The discussion below focuses on the ethical considerations around using these outside generative AI models to draft documents in our everyday practice.
Competency (Rule 1.1)
This Rule requires that a lawyer provide competent representation to a client, which includes, among other things, “attention to details . . . necessary to assure that the matter undertaken is completed with no avoidable harm to the client’s interest.” N.H. R. Prof. Conduct 1.1(b)(5). If you are going to use generative AI in your practice, you should endeavor to understand how it works and, in particular, what its limitations are, so as to avoid any potential harm to your client’s interests. In addition, the lawyer must take steps to verify any information obtained from generative AI. Consider, for example, the cautionary tale of the New York lawyer who used ChatGPT to locate case law, failed to verify the authenticity of that case law (which turned out to be fictitious; such fabricated responses are known as “hallucinations” in generative AI), and included it in filings with the court. See Mata v. Avianca, No. 22-CV-1461 (PKC), 2023 WL 4114965 (S.D.N.Y. June 22, 2023). As part of their duty to provide competent representation (and candor to the tribunal, discussed below), lawyers must verify the authenticity of any information produced by ChatGPT or any other generative AI tool before using it to assist with drafting documents. Generative AI may produce false or incomplete information, as discussed above and below, and it is up to lawyers to ensure they are providing competent representation to clients.
Client Communications (Rule 1.4)
This Rule requires a lawyer to “reasonably consult with the client about the means by which the client’s objectives are to be accomplished.” Rule 1.4(a)(2). If the lawyer intends to use AI as a means to accomplish the client’s objectives, the lawyer should “promptly inform” the client of that intention in order to obtain the client’s informed consent. ABA Resolution 112 (2019)[1] states that a lawyer “should obtain approval from the client before using AI, and this consent must be informed.” (Note that ABA resolutions are not mandatory authority, but some courts consider them persuasive. See, e.g., Stock v. Schnader Harrison Segal & Lewis LLP, 35 N.Y.S.3d 31, 44 (2016) (relying in part on an ABA resolution in interpreting the scope of attorney-client privilege).) In addition, Resolution 112 provides that “[i]n certain circumstances, a lawyer’s decision not to use AI also may need to be communicated to the client if using AI would benefit the client.” (Emphasis in original.) As many clients are starting to use AI on their own, it is recommended that lawyers discuss with their clients the lawyer’s intended use (or non-use) of AI and the risks associated with its use, including the consequences of a client’s own disclosure of confidential or privileged information to AI programs.
Confidentiality (Rule 1.6)
This Rule is especially important to consider when using outside generative AI, because disclosing any confidential client information to a generative AI tool, without the client’s informed consent, would constitute a violation of the Rule. See N.H. R. Prof. Conduct 1.6(a) (“A lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized in order to carry out the representation, or the disclosure is permitted by paragraph (b).”). A lawyer should not assume that disclosure of confidential information for the purpose of using generative AI is impliedly authorized by the client. See ABA Resolution 112 (2019). Instead, informed consent is required. Recall that, by definition, “informed consent” means the agreement by a person to a proposed course of conduct after the lawyer has communicated adequate information and explanation about the material risks of and reasonably available alternatives to the proposed course of conduct. N.H. R. Prof. Conduct 1.0(e) (emphasis added). Similar to Rule 1.4, before disclosing any client information to a generative AI program, the lawyer must communicate “adequate information and explanation about the material risks of” – and “reasonably available alternatives to” – the use of generative AI. There may be situations where a client already understands the material risks of using generative AI and may consent to its use for the purpose of creating draft documents. However, using ChatGPT to draft response e-mails to opposing counsel would be prohibited, because doing so requires disclosure of the lawyer’s strategy in responding to opposing counsel, which is “information relating to the representation of a client” – and thus confidential information under Rule 1.6. Relatedly, lawyers should generally avoid using generative AI to draft pleadings for the court, because doing so will almost always entail disclosure of confidential client information to the outside AI program.
Candor to the Tribunal (Rule 3.3)
As discussed above, lawyers have already landed in hot water for using ChatGPT to assist with drafting pleadings where the information provided by ChatGPT turned out to be false.[2] See, e.g., Larry Neumeister, “Ex-Trump lawyer Michael Cohen says he unwittingly sent AI-generated fake legal cases to his attorney,” THE ASSOCIATED PRESS, Dec. 29, 2023 (accessed March 13, 2024) (disbarred former lawyer Michael Cohen explained to the court: “As a non-lawyer, I have not kept up with emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like Chat-GPT, could show citations and descriptions that looked real but actually were not … Instead, I understood it to be a super-charged search engine and had repeatedly used it in other contexts to (successfully) find accurate information online.”).
Rule 3.3(a)(1) prohibits a lawyer from knowingly making “a false statement of fact or law to a tribunal or fail[ing] to correct a false statement of material fact or law previously made to the tribunal by the lawyer.” As it has become more widely known that ChatGPT and other generative AI models occasionally produce “hallucinations” – i.e., responses that provide false information – lawyers cannot rely on generative AI alone to conduct legal research, as doing so may result in the lawyer presenting non-existent case law to the court. To do so is effectively the same as if the lawyer had created false cases and citations of their own and presented that “law” to the court as binding or persuasive legal authority. Such conduct is clearly prohibited by Rule 3.3, Rule 1.1, and Rule 8.4 (which prohibits lawyers from engaging in conduct that involves fraud, dishonesty, deceit, or misrepresentation). If lawyers intend to use AI for preliminary legal research, they must be diligent in cite-checking (and, as mentioned above, in ensuring no confidential client information is disclosed to the generative AI program). As a general reminder, even without the use of AI, lawyers have an ethical responsibility to confirm the validity of citations and case law presented to the court.
Responsibilities of Lawyers and Duties to Supervise (Rules 5.1-5.3)
Collectively, these three Rules deal with lawyers’ duties to supervise the conduct of subordinate lawyers and other staff. For example, Rule 5.1 requires each lawyer with managerial authority to make reasonable efforts to ensure that lawyers in the firm follow the New Hampshire Rules of Professional Conduct, whereas Rule 5.3 deals with supervising non-lawyer assistance. Rule 5.2 provides that subordinate lawyers are obligated to follow the New Hampshire Rules of Professional Conduct and, except in limited circumstances, can be found to have violated the Rules even when acting at the direction of a supervising lawyer. See Rule 5.2 (explaining that a lawyer is “bound by” the Rules “notwithstanding that the lawyer acted at the direction of another person,” except that “a subordinate lawyer does not violate the Rules … if that lawyer acts in accordance with the supervisory lawyer’s reasonable resolution of an arguable question of professional duty.”).
Notably, with respect to Rule 5.3, the ABA changed the title of the Model Rule in 2012 from “Responsibilities Regarding Nonlawyer Assistants” to “Responsibilities Regarding Nonlawyer Assistance” to clarify “that the scope of Rule 5.3 encompasses nonlawyers whether human or not.” See ABA Resolution 112 (2019) (citations omitted) (emphasis added). New Hampshire adopted the same change in 2015. As noted in the ABA Resolution, this language is applicable to our new generative AI environment. Consistent with the Rule, it is incumbent upon lawyers to supervise the work of AI used in connection with providing legal services – for example, by making sure the use of AI does not disclose confidential client information and that the AI-generated information used is accurate and complete. See ABA Model Rule 5.3, cmts. 3 and 4.
When can we use outside generative AI to draft documents?
There are some circumstances where lawyers may use generative AI ethically – namely, when the lawyer knows its use will not violate the Rules, and especially where the client has given informed consent. Before using AI to draft documents, however, lawyers should review the AI’s terms and conditions to understand fully how it works, and to avoid any potential copyright infringement or other issues that could harm the client or result in a violation of the Rules.
Examples of potentially ethical uses of AI to draft documents:
- First drafts of generic documents such as litigation hold letters (as long as they do not contain any client- or matter-identifying information, and as long as the lawyer reviews them for accuracy and completeness);
- First drafts of contracts (again, as long as they do not contain client- or matter-identifying information and are reviewed by the lawyer);
- Initial template creation (same caveats as above);
- Initial drafts of generic discovery requests;
- Drafting legislation;
- Marketing materials, blog posts, or e-blasts to clients.
Once AI services begin to move in-house at firms, where client confidences can be sufficiently guaranteed, it is likely that such software will be integrated more closely into day-to-day practices for lawyers – including for contracts, estate plans, trusts, pleadings, memoranda, correspondence, deeds, easements, discovery drafting, subpoenas, and more. It will always be up to lawyers, however, to ensure that AI-generated content is up to par in order to protect our clients’ interests.
What other ethical considerations apply to the current state of generative AI when it comes to drafting documents?
Consistent with the ethical obligations of competence and diligence, we should be vigilant in safeguarding the integrity of not only the documents we produce, but also those we receive from others.
Consider, by way of example, a family law practitioner who receives documents from an unrepresented party on the other side, containing photographs or even video of the practitioner’s client doing or saying something that could hurt the client’s case. The lawyer may not want to take these documents at face value, because it is possible they were created using generative AI.
Many of us have heard the terms “hallucination” and “deep fake.” Both describe something created by generative AI (a response, document, photograph, video, or audio recording) that depicts something untrue or that never happened. It is important for lawyers to understand that these exist and could impact not only their own use of AI, but also the potential use of AI by other parties (including their clients).
“[A]s AI becomes more advanced, it will be used by lawyers to detect deception.” ABA Resolution 112 (2019). AI is already being utilized in the e-discovery context to determine whether documents may be missing from a production. Now and in the future, as required by a lawyer’s duty of competence, lawyers should be prepared to address AI’s potential impact on discovery and on documents received from others.
Will the day come when I have to incorporate generative AI into my practice?
The ABA has indicated that this is a possibility. Consider Rule 1.5, which requires a lawyer’s fees to be reasonable. ABA Resolution 112 (2019) states that “[f]ailing to use technology that materially reduces the costs of providing legal services arguably could result in a lawyer charging an unreasonable fee to a client.” For now, the risks of using AI often outweigh its potential benefits. However, as AI continues to develop and becomes more tailored – and less risky – for lawyers to use, it is worth considering the ways AI can help reduce client costs and increase the lawyer’s efficiency and effectiveness. One helpful resource for lawyers to follow is the ABA’s Task Force on Law and Artificial Intelligence; the NHBA has also created a Special Committee on Artificial Intelligence, which will provide guidance to NH lawyers on best practices for the use of AI.
This Ethics Corner Article was submitted for publication review to the NHBA Board of Governors at its June 7, 2024 Meeting. The Ethics Committee provides general guidance on the New Hampshire Rules of Professional Conduct and publishes brief commentaries in the Bar News and other NHBA media outlets. New Hampshire lawyers may contact the Committee for confidential and informal guidance on their own prospective conduct or to suggest topics for Ethics Corner commentaries by emailing the Ethics Committee Staff Liaison at: ethics@nhbar.org
[1] https://www.americanbar.org/content/dam/aba/directories/policy/annual-2019/112-annual-2019.pdf
[2] https://www.npr.org/2023/12/30/1222273745/michael-cohen-ai-fake-legal-cases