In today’s column, I am continuing my ongoing and extensive coverage of AI and the law (see my prior comprehensive analysis at the link here and other akin discussions at the link here). Specifically, I aim to take a close look at an exciting new release by the esteemed American Bar Association (ABA) known as Formal Opinion 512, an important document titled “Generative Artificial Intelligence Tools” that provides guidance for lawyers.
Boom, drop the mic.
If you aren’t sure why I proclaim that this is exciting, allow me to quickly clarify and expound on the weighty matter.
You might vaguely be aware that some attorneys have been sloppy in their use of generative AI when preparing their legal briefs and court filings. For example, a now-classic instance involved two attorneys who formally submitted papers to the court containing fictitious legal citations and precedents produced by generative AI, ultimately getting them into hot water with the judge, see my detailed review of the incident at the link here. The lawyers had failed to double-check the output from generative AI and merely shuffled it along into their formal submission to the court.
Generative AI has at times been known to concoct legal cases and quotations, a phenomenon that has popularly become known as AI hallucinations (a term that I disfavor due to anthropomorphizing AI, but we are stuck with the catchy phrase, see my explanation about the real issues at play, per the link here and the link here).
Back to the two lawyers. It was a bad oversight on their part, a bad way to practice law, and sadly something that seems to keep happening as other attorneys throughout the US (and the world) repeat the same mistake again and again, see my discussion at the link here. You would think that word-of-mouth might spread far and wide on these gotchas. Nope, apparently not far enough and not wide enough.
Sad face.
Well, we now have a proverbial laying down of the law, the dropping of the boom, the pounding of a stake in the ground, namely that the ABA, which was “founded in 1878 as a commitment to set the legal and ethical foundation for the American nation” (quoted per the ABA website), took this bold and earnestly awaited action on July 29, 2024:
- “In its first guidance on use of artificial intelligence, the American Bar Association Standing Committee on Ethics and Professional Responsibility has released a formal opinion urging lawyers to be mindful of several model rules, including ones related to competency, communications, confidentiality and fees.” (per the ABA website).
- “This opinion identifies some ethical issues involving the use of GAI (generative artificial intelligence) tools and offers general guidance for lawyers attempting to navigate this emerging landscape.” (per the ABA website).
In case you presume that maybe I am overly enthused about this, allow me a moment to quote Dazza Greenwood, globally noted expert on generative AI for law and legal processes, founder of law.MIT.edu (research) and CIVICS.com (consultancy), per his excellent DazzaGreenwood Weblog, which is well worth subscribing to (see the link here):
- “Yesterday, the American Bar Association (ABA) took a significant step forward in addressing the role of artificial intelligence in the legal profession. On July 29, 2024, the ABA released Formal Opinion 512, providing thoughtful and comprehensive ethics guidance on the use of “Generative Artificial Intelligence Tools” in legal practice. This important opinion represents a pivotal moment in the U.S. legal landscape, signaling a growing recognition of generative AI as a valuable and beneficial technology for the practice of law.” (blog posting “ABA’s Landmark Opinion on Generative AI” by Dazza Greenwood, July 30, 2024).
Right on.
Had to be said, and sure enough, thankfully, Dazza Greenwood has done so.
What The Hubbub Is All About
Let’s begin at the beginning.
Should lawyers be making use of generative AI?
Yes, of course, they should be.
There are numerous advantages and benefits to be had, see my straightforward indication of why lawyers, law partners, and law offices ought to be adopting generative AI, per the link here.
Wait a second, some opt to say, I will wait until the dust clears and then proceed when I feel the time is right. These are the laggards. They continue to cling to their fax machines with every fiber of their body. Tech is intimidating and mysterious. Let me say something about this. Truth be told, any modern-day legal professional will be behind the times, missing the boat, and otherwise at a distinct and substantive disadvantage if they watch haplessly as the rest of the legal industry adopts generative AI while they have their heads in the sand.
Mark that with yet another sad face.
But here’s the rub about adopting generative AI for legal practice.
Those who wantonly adopt generative AI, or do so for the sheer sake of proclaiming they are using AI, are setting themselves up for failure. Surely so. It goes like this. A big splash is made that law firm XYZ is pressing its lawyers and legal staff to take up arms via the use of generative AI. No strategy is at play. No training takes place. Nobody considers where, what, how, who, when, or why. The whole kit and caboodle turns out to be purely chaotic and lacking in sensibility and structure.
Only by some form of miraculous dumb luck is this going to somehow succeed. As they say, even a broken clock is right twice a day. In my experience of coming into these messes as a clean-up crew, the trouble is now double trouble.
The first trouble was doing things the wrong way at the get-go. The second trouble is that the law firm and lawyers therein are burned by what happened and are going to (rightfully) fight tooth and nail on what happens next. Who could blame them? In essence, the difficulty of garnering useful and sensible usage of generative AI for their legal practice has gone not just to zero, it has dropped so far below the line into negative territory that digging your way out is costly and energy-sapping.
A rule of thumb is that adopting generative AI in the legal realm requires attentiveness, smarts, and a systematic methodological approach. See my detailed description at the link here.
Another way to say this is to invoke the Goldilocks principle. The porridge should not be too hot or too cold. It should be just right. When you implement generative AI, do so just right. Do not cram generative AI into every nook and cranny right out of the gate; that’s porridge that is too hot. Do not test the water with some feeble usage that nobody in the law firm cares a dime about; that’s porridge that is too cold. Carefully identify the right circumstances and adopt generative AI in the proper and right way.
There is an added danger afoot for those who are on the fence about adopting generative AI.
I’ve already somewhat hinted at or clued you in about the danger, namely lawyers in a law firm who decide to take the bull by the horns and use generative AI even if the firm isn’t doing anything on the matter. That’s what happened with the two ill-fated attorneys that I mentioned at the opening. You could almost credit these brave souls as heroic. The problem is that any such attorneys will be groping blindly and likely end up falling for every pitfall and avoidable trap imaginable.
Not only are lawyers of that ilk subject to possible court sanctions, but they are also bound to alienate the judge (and other neighboring judges) who discover the transgression. No lawyer in their right mind aiming to have a lengthy career in a particular jurisdiction commits an obvious foot-in-mouth blunder in the immediate term that will put them in the everlasting doghouse for the long term with a given court or judge. Chancy. Foolish. Avoidable.
The law firm can also face sanctions and, worse still (deservedly or not), earn a reputation for either trying to trick the court or being slipshod and lacking in professionalism. This can readily become widely known. Budding newbie lawyers won’t want to work there. Heavyweights won’t want to go there. Plus, it is conceivable that existing and prospective clients might hear about it too. That said, they might not know exactly what happened, just generally being aware or advised that the particular law firm does not have its ducks lined up. The generative AI gaffe becomes an all-encompassing taint on the law firm.
Alright, what can be done about all this?
Perhaps the surest way to rivet the attention of busy lawyers is by having the true maker of the rules come out and pronounce what the rules are. You see, so far, some state bars have laid down rules, plus the existing ABA rules have been informally and variously stretched into the generative AI realm, see my coverage at the link here, but without the ABA formally putting the rules in bright lights and with the suitable fanfare of formality, lawyers could still ignore (and are still ignoring) all the warning signs.
Thus, a round of applause goes to the newly released ABA Formal Opinion 512. The sheriff has come to town and posted a stern, completely official warning poster for all to see. Ignore at your dire peril. Read it or weep.
Let’s dig into it.
Unpacking Formal Opinion 512
The newly released Formal Opinion 512 “Generative Artificial Intelligence Tools” guidance document is fifteen pages in length and includes the kind of customary footnotes that you would find in any bona fide legal narrative. I am not going to cover the entire document and will highlight the key components.
Those of you in law school, those lawyers wanting to be up-to-date, and others interested in the full details ought to consider reviewing the whole document, available at the ABA website on the link here.
I’ll boil things down to these crucial segments:
- (A) Competence
- (B) Confidentiality
- (C) Communication
- (D) Meritorious Claims and Contentions and Candor Toward the Tribunal
- (E) Supervisory Responsibilities
- (F) Fees
I will express my own layman’s views of what the document suggests or purports to say.
Do not rely upon my interpretation. Let me repeat that remark. Do not rely upon my interpretation. Hope that settles things. I am telling you right now that your best bet is to read my off-the-cuff cursory comments simply for fun and engagement, as a precursor or warm-up to whet your appetite, and then proceed to dive directly and purposefully into the actual fifteen pages of the document for the cold hard facts.
My suggestion is this. Scour and read the formal document with all the studiousness and fervor that you would for doing advocacy work on behalf of a dear client. And, hey, put down that cup of coffee (an oblique reference to Glengarry Glen Ross).
Your law career might depend upon it.
Oh, did that catch your attention?
Just wanted to get your legal beagle Spidey-sense tingling and pique your interest, thanks.
On with the show.
Part (A): Competence
We shall begin with competence.
In the rules of the ABA, specifically Model Rule 1.1, lawyers are supposed to be competent when representing their clients. Makes sense. A lawyer lacking the necessary overall competence is not going to be suitably able to represent their client. The client will lack proper legal representation, which undercuts justice.
The question on the minds of those in the know has been whether knowing (or not knowing) about generative AI (GAI) falls within the rubric of competence.
If generative AI is outside the competence realm, lawyers are presumably officially off the hook, though it can still be a bad look and be problematic. If generative AI is inside the competence sphere, there are a lot of lawyers who are not in compliance with the competence considerations and ought to be doing their darndest to get into compliance by learning about and sensibly using generative AI.
See my overall logic on this?
I trust so.
Formal Opinion 512 says this (excerpt):
- “To competently use a GAI tool in a client representation, lawyers need not become GAI experts. Rather, lawyers must have a reasonable understanding of the capabilities and limitations of the specific GAI technology that the lawyer might use. This means that lawyers should either acquire a reasonable understanding of the benefits and risks of the GAI tools that they employ in their practices or draw on the expertise of others who can provide guidance about the relevant GAI tool’s capabilities and limitations.”
The good news is that this seems to suggest that if a lawyer opts to use generative AI, or maybe is told to do so by their law firm, they need to do so in a competent manner. They can’t just fake it and mindlessly use the AI. This hopefully is a bit of a wake-up call. Law firms that dish out generative AI logins like candy are not meeting the spirit of the rule if they don’t also ensure some form of training or other means of adding competence to the equation.
The unstated part of this is that there is seemingly no requirement to use generative AI per se; it isn’t considered integral to the overall competence of being a lawyer. A lawyer can seemingly choose to remain aloof from generative AI. The moment they touch it, they are on the hook for some level of competence in using it. If they don’t go near it and aren’t using it, there doesn’t seem to be any bearing on competence considerations.
You can certainly see why that is seemingly the only viable approach right now. If the use of generative AI were somehow mandated or inserted into the basket of competence, a loud outcry would arise. Lawyers would protest that this is a bridge too far. Do not tell lawyers how to do their jobs in terms of what tools to use. Only give guidance on what to do when using whatever tools they choose to use.
For those lawyers breathing a sigh of relief that they can presumably turn a blind eye when it comes to generative AI, take a long hard look at this portion of the Formal Opinion 512 (excerpts):
- “As GAI tools continue to develop and become more widely available, it is conceivable that lawyers will eventually have to use them to competently complete certain tasks for clients.”
- “But even in the absence of an expectation for lawyers to use GAI tools as a matter of course, lawyers should become aware of the GAI tools relevant to their work so that they can make an informed decision, as a matter of professional judgment, whether to avail themselves of these tools or to conduct their work by other means.”
- “As previously noted regarding the possibility of outsourcing certain work, “[t]here is no unique blueprint for the provision of competent legal services. Different lawyers may perform the same tasks through different means, all with the necessary ‘legal knowledge, skill, thoroughness and preparation.’” Ultimately, any informed decision about whether to employ a GAI tool must consider the client’s interests and objectives.”
This is what I’ve been saying all along. In essence, if you don’t use generative AI, you are potentially undercutting the legal efforts on behalf of your client. The tool is out there. It is ready for use. If you ignore it and do not give it due consideration, this would seem to verge toward a failure of the duty of care.
You ought to explicitly know why you aren’t using generative AI. Only then can you explain and justify that you intentionally opted to not use generative AI.
I want to restate what I’ve just said. I am not saying that in all circumstances generative AI is warranted for the legal work you are doing for a client. I am saying that for all legal work that you do, you should consciously and explicitly rule in or rule out the use of generative AI for that work. This should be carefully documented. The documentation might be needed if your client later decides that you seemed to fail in your duties and that had you, in fact, used generative AI in your legal work there would have been a demonstrable difference in proficiency or outcome that materially aided their case.
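For those who like tangibles, here is a minimal sketch, in Python, of what such a rule-in or rule-out record might look like. To be clear, the structure and field names are purely my own illustrative assumptions; Formal Opinion 512 does not prescribe any such format.

```python
from dataclasses import dataclass, field
from datetime import date

# Purely illustrative: a record a firm MIGHT keep when ruling generative AI
# in or out for a given piece of legal work. Formal Opinion 512 does not
# prescribe this structure; the fields are hypothetical.
@dataclass
class GaiUsageDecision:
    matter_id: str    # internal identifier for the client matter
    task: str         # e.g., "first-draft research memo"
    use_gai: bool     # ruled in (True) or ruled out (False)
    rationale: str    # why the decision serves the client's interests
    decided_by: str   # responsible lawyer
    decided_on: date = field(default_factory=date.today)

# Example: explicitly ruling out generative AI for a sensitive task.
decision = GaiUsageDecision(
    matter_id="2024-0042",
    task="Draft settlement strategy memo",
    use_gai=False,
    rationale="Highly confidential client details; no firm-vetted GAI tool available.",
    decided_by="A. Attorney",
)
print(decision)
```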
Okay, that covers a smidgeon of the segment on competence. Please take a look at the full document for further twists and turns.
Part (B): Confidentiality
The category of confidentiality is somewhat more apparent as a topic than perhaps the matter of competence that I just covered.
Here’s the deal.
I’ve previously explained that if a lawyer uses generative AI and opts to enter anything about their client, they are taking a big risk of undermining the attorney-client privilege and potentially undercutting the confidentiality of the client, see my coverage at the link here. The reason that the attorney is playing with fire is that the AI makers usually have a clause in their licensing agreement that says the AI maker can examine any prompts or other inputs of a user, and furthermore, they can reuse the content as part of the ongoing data training of their generative AI.
You can plainly discern the blunder a lawyer could make. They might assume that the generative AI is entirely under their own account and control. Enter the client details and see what ingenious legal strategies the AI comes up with. Oops, they might have just done a no-no on disclosure.
Smarmy attorneys quickly tell me that they will disguise the client data. For example, instead of mentioning Mr. Smith, they enter the name of Mrs. Jones. That kind of subterfuge might work, but you are still getting close to the flames of the fire.
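To show why that kind of name-swapping is thin gruel, here is a minimal sketch in Python of naive pseudonymization; the alias mapping and example prompt are entirely hypothetical, and notice that the surrounding facts can still identify the client even after the names are swapped.

```python
# Purely illustrative: naive pseudonymization before sending text to a GAI tool.
# The alias mapping below is hypothetical. Swapping names does NOT anonymize a
# matter; dates, amounts, locations, and the fact pattern itself can still
# identify the client.
ALIASES = {"Mr. Smith": "Mrs. Jones", "Acme Corp": "Widget LLC"}

def pseudonymize(text: str) -> str:
    for real_name, alias in ALIASES.items():
        text = text.replace(real_name, alias)
    return text

prompt = "Mr. Smith was terminated by Acme Corp on March 3 after reporting fraud."
print(pseudonymize(prompt))
# The output still reveals a termination date and a fraud report, which in a
# small jurisdiction might be enough to identify the client despite the alias.
```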
Generally, you ought to find out what the story is on the confidentiality associated with your use of generative AI. I am at times saddened and shocked that even lawyers seem to overlook reading the licensing agreements. A law firm that contracts to use generative AI will typically involve its tech folks to try and ensure that the confidentiality of the firm’s data will be upheld. A lawyer who decides to sign up for publicly available generative AI needs to give a double and triple look at the licensing, and let me say this, even that won’t necessarily be sufficient (there are still other ways that the confidentiality of data could be compromised).
Formal Opinion 512 says this (excerpts):
- “Before lawyers input information relating to the representation of a client into a GAI tool, they must evaluate the risks that the information will be disclosed to or accessed by others outside the firm.”
- “Lawyers must also evaluate the risk that the information will be disclosed to or accessed by others inside the firm who will not adequately protect the information from improper disclosure or use because, for example, they are unaware of the source of the information and that it originated with a client of the firm.”
I included the second quoted excerpt to highlight that confidentiality is not only vulnerable to the outside world but also to the inside world of a law firm. Bet you didn’t think about that unless you’ve been burned before or otherwise have a heightened awareness about tech.
Okay, once again, that covers a smidgeon of the segment, in this instance on confidentiality. Please take a look at the full document for further twists and turns.
Part (C): Communication
I’ve got a handy question for you to ponder.
If a lawyer opts to use generative AI, should they be obligated to inform their client that this is happening, or is there no need to share such details with a client?
It is a great question and if you’d like to see my analysis of the ins and outs, see the link here. In short, some would argue that you might as well tell the client about the coffee maker your office uses if you are also going to be burdening them with the fact that you are using generative AI. Others retort that you must let your client know about generative AI since it is far beyond any other ordinary tool that might come into play in your legal work. It is far beyond a coffee maker.
The rabbit hole goes pretty far on this. If you do inform a client, are you giving them an option? In other words, they presumably might insist that you aren’t to use generative AI. Now what? You have opened Pandora’s box. Just remain silent on the matter and the client won’t know what’s (presumably) good for them anyway, some say.
Formal Opinion 512 says this (excerpts):
- “The facts of each case will determine whether Model Rule 1.4 requires lawyers to disclose their GAI practices to clients or obtain their informed consent to use a particular GAI tool. Depending on the circumstances, client disclosure may be unnecessary.”
- “Of course, lawyers must disclose their GAI practices if asked by a client how they conducted their work, or whether GAI technologies were employed in doing so, or if the client expressly requires disclosure under the terms of the engagement agreement or the client’s outside counsel guidelines.”
- “There are also situations where Model Rule 1.4 requires lawyers to discuss their use of GAI tools unprompted by the client. For example, as discussed in the previous section, clients would need to be informed in advance and to give informed consent, if the lawyer proposes to input information relating to the representation into the GAI tool. Lawyers must also consult clients when the use of a GAI tool is relevant to the basis or reasonableness of a lawyer’s fee.”
- “It is not possible to catalogue every situation in which lawyers must inform clients about their use of GAI.”
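Just to illustrate how fact-dependent those triggers are, here is a minimal sketch in Python that encodes only the disclosure triggers quoted above; the function name and parameters are my own framing, this is not an official checklist, and it is certainly not legal advice.

```python
def must_disclose_gai_use(
    client_asked: bool,
    engagement_requires_disclosure: bool,
    client_info_entered_into_gai: bool,
    gai_affects_fee_basis: bool,
) -> bool:
    """Illustrative-only encoding of the disclosure triggers excerpted from
    Formal Opinion 512. Real analysis is fact-specific and broader than this;
    the function name and parameters are hypothetical framing, not an
    official checklist."""
    return (
        client_asked
        or engagement_requires_disclosure
        or client_info_entered_into_gai  # informed consent needed in advance
        or gai_affects_fee_basis
    )

# Example: a lawyer plans to paste client facts into a GAI tool.
print(must_disclose_gai_use(False, False, True, False))  # True: disclose and get consent
```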
The gist is that the rule depends upon the circumstances at hand. I realize this seems puzzling or rather loosey-goosey.
I would contend that this is promising, maybe surprisingly so, in that the questions of whether to inform or not inform, and whether to get client approval or not, are on the table. I mention this because some lawyers have expressed to me that their law firm has a blanket don’t-tell or a blanket do-tell policy, which to me seems like forcing a square peg into a round hole. Some law firms haven’t given the matter any thought at all and haphazardly do whatever strikes their fancy at the moment. Not a wise move.
I tend to concur with the ABA suggestion of the reasonableness test of how to approach the matter.
Okay, once again, that covers a smidgeon of the segment, in this instance on communication. Please take a look at the full document for further twists and turns.
Part (D): Meritorious Claims and Contentions and Candor Toward the Tribunal
I can rapidly cover this one.
Recall that I mentioned the instance of the two lawyers who got themselves in trouble by failing to double-check the generative AI output that they used in a court submission. I’m sure you remember this. Resoundingly memorable.
Courts don’t like that type of activity by lawyers. You might find of idle interest the legal weaseling that some lawyers have tried when caught with their hands in the cookie jar. They will plead that the portion was incidental to the case. They will plead that the portion was done with a sincere belief in its truthfulness and that they didn’t realize the content was fictitious. They will plead that the sun was in their eyes. It goes on and on.
This takes up court time needlessly. It makes the court look bad. Most courts take a dim view of this. Some though seem to be hedging and willing to go along with “my dog ate it” excuses. Personally, I prefer a tough love approach and that attorneys ought to be held fully accountable with none of the excuse-making involved, but that’s just me.
The problem emerging now is that lawyers see other lawyers getting away with excuses and figure they might do the same when the time comes. If you give an inch, a mile will soon be taken. Also, without a looming legal sword overhead, the impetus to double-check becomes less demanding. Do you go to the chore of double-checking, which is time you don’t have right now, or do you bet that the risk of an adverse gotcha is low and that you will essentially skate free later on?
Water flows to the lowest points.
Anyway, here are some excerpts on this from Formal Opinion 512:
- “Even an unintentional misstatement to a court can involve a misrepresentation under Rule 8.4(c). Therefore, output from a GAI tool must be carefully reviewed to ensure that the assertions made to the court are not false.”
- “Some courts have responded by requiring lawyers to disclose their use of GAI.”
- “As a matter of competence, as previously discussed, lawyers should review for accuracy all GAI outputs.”
- “In judicial proceedings, duties to the tribunal likewise require lawyers, before submitting materials to a court, to review these outputs, including analysis and citations to authority, and to correct errors, including misstatements of law and fact, a failure to include controlling legal authority, and misleading arguments.”
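If you are wondering what a double-check workflow might even start with, here is a minimal sketch in Python that uses a deliberately crude regular expression to pull citation-like strings out of GAI output and turn them into a manual verification checklist. The pattern is my own rough assumption, it will miss variants and catch false positives, and the actual verification still has to be done by a human against an authoritative source.

```python
import re

# Deliberately crude, illustrative pattern for US reporter citations such as
# "123 F.3d 456" or "598 U.S. 594". It will miss many formats and can catch
# false positives; the goal is only to build a checklist for HUMAN review,
# not to verify anything automatically.
CITE_PATTERN = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.\d?d?|S\. Ct\.)\s+\d{1,4}\b")

def citation_checklist(gai_output: str) -> list[str]:
    return sorted(set(CITE_PATTERN.findall(gai_output)))

draft = "As held in 123 F.3d 456 and reaffirmed at 598 U.S. 594, the duty applies."
for cite in citation_checklist(draft):
    print(f"[ ] Verify in an authoritative reporter: {cite}")
```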
In my mind, I think of the stockade, but that’s just a dream and outside of reality, sorry to say.
Okay, that covers a smidgeon of the segment. Please take a look at the full document for further twists and turns.
Part (E): Supervisory Responsibilities
I dare to imagine that you perhaps relish it when I ask an invigorating question, so let’s try that once more.
Start with this scenario. A law firm adopts a generative AI app. A lawyer in the law firm uses the anointed generative AI app. The lawyer doesn’t double-check the output. The lawyer shovels the output into a legal brief. The output is fictitious. The lawyer submits the brief to the court.
Assuming the court catches on, is the lawyer to blame, or is the law firm to blame?
You have ten seconds to decide.
Tick tock, tick tock.
Time’s up.
What did you decide?
You might have thought that the lawyer is responsible. It is their legal work and they submitted something false. The buck stops with them.
You might have instead thought that the law firm is responsible. The law firm handed the keys to the car to the lawyer. If the lawyer crashes into a tree, the law firm should have been careful either not to hand over the keys or to ensure that the lawyer knew how to properly drive the car. The law firm is on the hook. Period, end of story.
Formal Opinion 512 has this to say about law firm managers and supervisors (excerpts):
- “Managerial lawyers must establish clear policies regarding the law firm’s permissible use of GAI, and supervisory lawyers must make reasonable efforts to ensure that the firm’s lawyers and nonlawyers comply with their professional obligations when using GAI tools.”
- “Supervisory obligations also include ensuring that subordinate lawyers and nonlawyers are trained, including in the ethical and practical use of the GAI tools relevant to their work as well as on risks associated with relevant GAI use.”
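As a minimal sketch of what “clear policies” plus training might look like when baked into software, consider this Python gate that a firm could place in front of its GAI tooling; the approved-tool list and training ledger here are hypothetical stand-ins that I invented, not anything the opinion mandates.

```python
# Purely illustrative: a software gate reflecting the supervisory duties
# excerpted above. The approved-tool list and training ledger are hypothetical.
APPROVED_GAI_TOOLS = {"firm-vetted-assistant"}
TRAINING_COMPLETED = {"a.attorney", "b.paralegal"}

def may_use_gai(user: str, tool: str) -> bool:
    # Both conditions must hold: the tool is firm-approved AND the user has
    # completed the firm's GAI training.
    return tool in APPROVED_GAI_TOOLS and user in TRAINING_COMPLETED

print(may_use_gai("a.attorney", "firm-vetted-assistant"))  # True
print(may_use_gai("c.newhire", "firm-vetted-assistant"))   # False: training not done
print(may_use_gai("a.attorney", "public-chatbot"))         # False: tool not approved
```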
My viewpoint: thank goodness for this rule.
I say that because some law firms have resistance at the top when it comes to adopting generative AI. The top managers and supervisors adopt it while holding their noses. They seem to think that when the crud hits the fan, they will not be at fault. The individual lawyers and the legal staff will take the fall.
I’ve emphasized something over and over in my consulting and talks, namely, it takes a village to adopt generative AI in a law firm.
The top must buy in. One means of getting them to buy in is the carrot, such as bonuses or other benefits for doing so, and the other is the stick. They must realize they have a real stake in the game. They are on the line. This will make them serious and committed.
In terms of the scenario that I outlined, I suppose it is fair to say that both the lawyer and the law firm are culpable. Neither one gets a separate pass. Did the law firm provide suitable training on how to use generative AI and the need to double-check outputs? Did the lawyer not get the training or was no such training provided? And so on.
I’ve done debriefings, sometimes referred to as post-incident analyses or deconstruction activities (ominously, at times known as an autopsy or post-mortem, ugh), trying to ferret out what happened. This can be done to identify ways to improve or strengthen generative AI adoption. The sour and dour approach consists of doing so to find scapegoats and scare people into becoming robot-like when they use generative AI, squeezing any novelty or innovation out of the air. Please don’t do that.
Okay, that covers a smidgeon of the segment. Please take a look at the full document for further twists and turns.
Part (F): Fees
You would have to be living in a cave your whole life to not know that billing and fees are the lifeblood of a law firm and typically are the bane of existence and a constant influence on the career of a lawyer. Stress and fees go hand in hand.
How does that come into the picture with generative AI?
I’m glad you asked.
There is a mind-bender involved. Suppose that a lawyer can do their work in half the time by using generative AI. Wonderful, you might exclaim. As a client, you are cutting in half the bill that you would otherwise have gotten, presumably. Pop the champagne (since you have money saved and can use it for the bubbly).
Look at this from the perspective of the lawyer and the law firm. They adopted generative AI and now the billing has dropped in half. We’re just pretending; this is not really the case, so don’t go crazy on this. The law firm in this made-up scenario has taken a sharp blow to its revenue. The lawyers aren’t earning the same dough they once were. Heavens, the world is falling apart.
One approach would be to charge double the usual fee. In other words, keep the status quo on revenue by increasing the fees. A half hour of legal work now costs what a full hour of legal work cost before, given the added use of generative AI. Will clients go for this? Think that over.
Another approach would be to increase the volume of work to offset the drop in billable hours. A lawyer who finishes one client’s work in half the time could potentially take on a second client, using the freed-up time. They can do more legal work and cover more clients than they could before the generative AI adoption. Your book of business could potentially be boosted. Smiles all around.
Yet another angle would be to charge the client for the generative AI as undertaken by the lawyer and the law firm. Maybe that would even things out.
All kinds of questions arise. Is the generative AI charged on a per-minute basis, a per-hour basis, a per-transaction or per-prompt basis, or on a bulk basis? If you aren’t going to charge a client for the generative AI usage, and just consider it part of the legal work, should you tell the client that generative AI is being used or leave that out (this takes us to the communication aspects once again)?
All in all, generative AI spurs all manner of handwringing when it comes to fees.
Formal Opinion 512 weighs in on the topic (excerpts):
- “Rule 1.5(b) requires a lawyer to communicate to a client the basis on which the lawyer will charge for fees and expenses unless the client is a regularly represented client and the terms are not changing. The required information must be communicated before or within a reasonable time of commencing the representation, preferably in writing.”
- “Therefore, before charging the client for the use of the GAI tools or services, the lawyer must explain the basis for the charge, preferably in writing.”
- “GAI tools may provide lawyers with a faster and more efficient way to render legal services to their clients, but lawyers who bill clients an hourly rate for time spent on a matter must bill for their actual time.”
- “For example, if using a GAI tool enables a lawyer to complete tasks much more quickly than without the tool, it may be unreasonable under Rule 1.5 for the lawyer to charge the same flat fee when using the GAI tool as when not using it. “A fee charged for which little or no work was performed is an unreasonable fee.”
- “In applying the principles set out in ABA Formal Ethics Opinion 93-379 to a lawyer’s use of a GAI tool, lawyers should analyze the characteristics and uses of each GAI tool, because the types, uses, and cost of GAI tools and services vary significantly.”
Those excerpts are merely the tip of the iceberg on the fee considerations. It can get complex. Nonetheless, it can be reasonably figured out. Don’t sweat it. Work it out.
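To make the core arithmetic concrete, here is a minimal sketch in Python contrasting hourly billing, where the excerpt above says lawyers must bill their actual time, with a flat fee under a GAI speedup; the rate, hours, and fee figures are invented purely for illustration.

```python
# Invented numbers, purely to illustrate the fee arithmetic discussed above.
HOURLY_RATE = 400.0        # hypothetical hourly rate in dollars
HOURS_WITHOUT_GAI = 10.0   # time the task used to take
HOURS_WITH_GAI = 5.0       # same task finished in half the time

# Hourly billing: per the excerpt, bill the ACTUAL time spent.
print(f"Hourly, no GAI:   ${HOURLY_RATE * HOURS_WITHOUT_GAI:,.0f}")
print(f"Hourly, with GAI: ${HOURLY_RATE * HOURS_WITH_GAI:,.0f}")  # client keeps the savings

# Flat fee: charging the same flat fee for far less work may be unreasonable
# under Rule 1.5, per the excerpt above.
FLAT_FEE = 4000.0
print(f"Flat fee (may warrant revisiting given the GAI speedup): ${FLAT_FEE:,.0f}")
```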
Now, for the last time in this dialog, I can say that the above covers a smidgeon of the segment. Please take a look at the full document for further twists and turns.
Conclusion
Congratulations, you have a smattering of an inkling of what the ABA Formal Opinion 512 covers.
What did I just cover in this discussion?
For those of you who opted to skip the above because you thought it was TL;DR (too long, didn’t read), here is the official summary per Formal Opinion 512 of what it purports to cover:
- “Lawyers using GAI tools have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature of GAI. In using GAI tools, lawyers also have other relevant ethical duties, such as those relating to confidentiality, communication with a client, meritorious claims and contentions, candor toward the tribunal, supervisory responsibilities regarding others in the law office using the technology and those outside the law office providing GAI services, and charging reasonable fees. With the ever-evolving use of technology by lawyers and courts, lawyers must be vigilant in complying with the Rules of Professional Conduct to ensure that lawyers are adhering to their ethical responsibilities and that clients are protected.”
The crux is that the rules of the road are now posted for all to see. This will undoubtedly provide guidance and shaping to the individual state bars that opt to promulgate their own versions of similar rules.
Meanwhile, I’d ask that you let as many lawyers as you can know that the ABA Formal Opinion 512 is posted and awaiting their review. Let me announce it this way. The new phonebook is here, the new phonebook is here (hint, that’s a famous line from a stellar comedy starring Steve Martin).
Or, since a lawyer is of course steeped in the law, remind them of the distinguished line by English jurist, John Selden: “Ignorance of the law excuses no one.”