The Right to (Human) Counsel: Real Responsibility for Artificial Intelligence
By
Keith Swisher[1]*
The bench and bar have created and enforced a comprehensive system of ethical rules and regulation. In many respects, it is a unique and laudable system for regulating and guiding lawyers, and it has taken incremental measures to account for the wave of new technology involved in the practice of law. But it is not ready for the future. It rests on an assumption that humans will practice law. Although humans might tinker at the margins, review work product, or serve some other useful purposes, they likely will not be the ones doing most of the legal work in the future. Instead, AI counsel will be serving the public. For the system of ethical regulation to serve its core functions in the future, it needs to incorporate and regulate AI counsel. This will necessitate, among other things, bringing on new disciplines in the drafting of ethical guidelines and in the disciplinary process, along with a careful review and update of the ethical rules as applied to AI practicing law.
If you needed important legal advice, which of the following two lawyers would you choose?
Lawyer Kingsfield: this lawyer has handled 30,000 court cases and can readily recall 10,000 of them. He has also reviewed 30,000 statutes and regulations and can readily recall 10,000 of them. He and his paralegal can perform legal research at a rate of 30 new and relevant legal sources per hour. He has also represented 1,000 clients and learned from each of them. He has received five trainings on implicit, unconscious, and cognitive biases, which he endeavors to minimize, although they remain present. In light of his number of open matters, Lawyer Kingsfield has, on average, 10 hours to dedicate to each matter each week.
Lawyer Automata: this lawyer has handled 3,000,000 cases and can readily recall all of them. She has reviewed 3,000,000 statutes and regulations and likewise can readily recall all of them. Although her current knowledge incorporates almost all relevant legal sources, she can perform new legal research if needed faster than anyone else in the bar. She has also represented 3,000 clients and has learned from each of them. She does not suffer from any implicit, unconscious, or cognitive biases herself (although the legal and factual information on which she and the other lawyers rely may contain such flaws). Lawyer Automata has as much time as she needs to dedicate to each matter.
The choice seems simple: Lawyer Automata. She is more knowledgeable, more competent, less biased, and less time-constrained than Lawyer Kingsfield.[2] Compared to him, Lawyer Automata will be more likely to maximize expected utility, however defined. This includes substantive utility (e.g., the correct legal outcome under the relevant facts and law) and process utility (e.g., absence of bias). In light of Lawyer Automata’s apparently all-around superior position, she is the clear choice.
Sometimes, though, lawyers are asked to make predictions, not just to issue the soundest legal advice. Below is how the two lawyers fare in their predictions in criminal cases:
Lawyer Kingsfield: in predicting whether and for how long a judge will sentence a criminal defendant to prison (an obviously critical question for any criminal defendant deciding whether to take a plea offer or to proceed to trial), Lawyer Kingsfield has been 61% accurate in his predictions. He gets it wrong 39% of the time.
Lawyer Automata: in predicting whether and for how long a judge will sentence a criminal defendant to prison, Lawyer Automata has been 95% accurate in her predictions. She gets it wrong 5% of the time.
Here again, Lawyer Automata is the clear choice. She is far more predictively accurate overall than Lawyer Kingsfield.
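To put this gap in concrete terms, consider a stylized expected-cost calculation (the three-year figure is purely illustrative and not drawn from any dataset): if a mispredicted sentence costs the client an average of $c$ extra years of incarceration, the expected cost of relying on each lawyer’s prediction is $p_{\text{wrong}} \times c$. With $c = 3$:

$$0.39 \times 3 = 1.17 \text{ expected extra years (Kingsfield)} \qquad \text{vs.} \qquad 0.05 \times 3 = 0.15 \text{ (Automata)}.$$

On these stylized assumptions, the accuracy gap translates into roughly a year of expected incarceration per prediction.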
But I left out one potentially important detail: Lawyer Automata is not human.
Instead, Lawyer Automata is the most advanced form of artificial intelligence (AI),[3] designed and tested to provide the best legal advice, the most accurate predictions, and the most effective advocacy. I also omitted one other important detail: she does not yet exist. This Essay proceeds on the premise, assumed for the sake of argument, that she will exist within the next 100 years. This premise has been rendered all the more plausible by sophisticated AI language programs, such as ChatGPT, IBM’s Watson, or Google’s Bard, which provide answers to complicated questions quickly, clearly, and generally competently; indeed, it can be quite difficult to distinguish their work product from human work product (even though comparable human work product takes much more time to produce).[4] Furthermore, a “robot lawyer” nearly made its appearance this year.[5] Given this assumed premise, we should explore whether Lawyer Automata is indeed the right choice of counsel and, if so, how compelling that choice is.
This Essay thus addresses the ethicality and constitutionality of what seems like an unavoidable future: the availability and advantages of advanced AI counsel to represent clients.[6] In other words, it generally asks whether we have a right to human counsel (if we want it) and how we should ethically regulate AI counsel.[7] My thesis essentially is that what makes lawyers special is legal ethics (as broadly construed below), not simply their legal acumen; AI counsel will undoubtedly exceed human lawyers’ acumen and may arguably replicate their legal ethics, making it suitable and superior counsel (all things considered). Human involvement, however, will be needed to infuse and monitor AI counsel’s ethics and may remain advisable or necessary to facilitate the client or other human relationships.
Part II highlights the potential benefits of AI counsel vis-à-vis human counsel, and Part III highlights the benefits of human counsel vis-à-vis AI counsel, including whether human exceptionalism is preferable, or perhaps even required, for counsel. Part IV briefly discusses, but of course cannot resolve, the constitutionality of AI counsel, which does not yet exist. Finally, Part V discusses future ethics, i.e., ethical rules and regulation as applied to AI counsel. The regulation of AI practices will likely move from the fringe to the core, at least if legal ethics is to remain central to the practice of law. This will necessitate improvements and adjustments in the disciplinary process and the ethical rules. Furthermore, the modern approach to ethical regulation is to adjust the existing rules incrementally and somewhat slowly, for example, by incorporating references to technology or by strengthening or weakening a particular rule relating to human lawyers. Simply repeating this minimalist approach will miss the mark: we are at the cusp of an entirely new paradigm, and the existing rules and approaches are inadequate and partly irrelevant to the next-gen practice of law.
AI counsel presents several advantages over a more traditional (human) approach. This Part briefly highlights some existing AI-like applications in the criminal justice system. Along the way, it points out some advantages of these applications over human lawyers (or human lawyers alone), although it is by no means exhaustive of the potential advantages of AI (many of which might not yet even be contemplated).
Although the practice is far from uniform, criminal courts across the country now use certain algorithms. These predictive or actuarial models influence judicial rulings on, for example, pretrial release and sentencing for criminal defendants.[8] Several jurisdictions now require the use of these new predictive algorithms.[9] One of the primary reasons for this algorithmic infiltration seems commendable: research shows human decision-makers’ susceptibility to implicit and cognitive biases; algorithms promise to reduce or eliminate these biases and errors.[10] For example, owing perhaps to political pressure or subconscious racial bias, judges have sentenced certain racial groups more harshly on average than other groups.[11] Likewise, certain prosecutors and prosecutorial offices have discriminated on the basis of race or social class when deciding whom to prosecute and how severely.[12] An algorithm or future AI, at least in theory, need not suffer from these flaws; it would not discriminate against certain clients or opposing parties.[13]
In addition, harnessing big data, an algorithmic model might discern the most significant factors leading to recidivism (re-offending), more so than a human judge, criminal defender, or prosecutor, who can access and process far less data.[14] Human prosecutors and judges might have been locking up defendants unnecessarily pending trial, even though those defendants would not have committed another crime and would have shown up to trial. Conversely, these human actors might be missing important factors that lead to recidivism and releasing the defendants, thereby putting the community at increased risk of crime. Similarly, AI could more accurately predict what a judge or agency will do in these and other types of matters, which would help clients make more informed and wise decisions. (AI counsel would also more accurately predict the predictions and decisions of other AI or AI-like inputs in the justice system, such as the current actuarial risk models or potential AI judges of the future.[15])
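For readers unfamiliar with these actuarial tools, the following sketch illustrates, in deliberately simplified form, how a pretrial risk model might score a defendant. The features, weights, and intercept here are hypothetical and hand-set; deployed instruments (e.g., COMPAS or the Arnold Public Safety Assessment) are instead fit to large historical datasets of defendants and outcomes.

```python
import math

# Hypothetical, hand-set weights; real instruments are statistically
# fit to historical data rather than chosen by hand.
WEIGHTS = {
    "prior_failures_to_appear": 0.8,
    "prior_violent_convictions": 1.1,
    "pending_charges": 0.6,
}
BASELINE_LOG_ODDS = -2.5  # illustrative intercept

def risk_score(defendant: dict) -> float:
    """Return an estimated probability (0 to 1) of pretrial failure."""
    log_odds = BASELINE_LOG_ODDS + sum(
        weight * defendant.get(feature, 0)
        for feature, weight in WEIGHTS.items()
    )
    return 1 / (1 + math.exp(-log_odds))  # logistic transform

# One prior failure to appear plus one pending charge:
print(round(risk_score({"prior_failures_to_appear": 1,
                        "pending_charges": 1}), 2))  # -> 0.25
```

As the caveats above suggest, however, even a formally neutral model of this kind can reproduce bias if the historical data on which it is trained reflect biased past decisions.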
Human decision-making capacity is also limited by bounded rationality and by the types of cognitive biases (e.g., the availability heuristic) that have historically hindered implementation of pure rational choice theory.[16] Humans simply cannot invariably live up to that theory’s demands. Although sometimes the issue is that the particular rational choice model is insufficiently sophisticated to capture the range of rational and actual human behavior, sometimes humans simply err in their decisions and in their reasoning toward those decisions.[17] AI, in theory, could avoid these flaws and reason closer to perfection.[18]
Another benefit to AI counsel is that certain or all “consumer” ethical issues may be a thing of the past.[19] For example, the lack of adequate client communication is the, or one of the, most common ethical complaints about lawyers today.[20] But AI counsel will presumably have impeccable communication routines and, if questioned, will be able to produce detailed records of that communication to inquiring disciplinary authorities. Similarly, some human lawyers become exhausted, overworked, or distracted, and for these reasons, they miss deadlines or fail to communicate timely with clients and others. AI counsel, in contrast, would never tire and would be ever diligent. To be sure, like humans, AI would need doctors of a sort (technicians or other AI) in the event of a glitch, but otherwise AI counsel would always be working, reliable, and punctual.
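As a minimal sketch of what such a disciplinary-ready record might look like, the snippet below keeps a hash-chained communication log so that any later alteration of an entry is detectable. The field names and design are hypothetical, one of many ways such records could be kept.

```python
import datetime
import hashlib
import json

def log_communication(log: list, matter_id: str, summary: str) -> None:
    """Append a tamper-evident record of a client communication.

    Each entry includes the hash of the previous entry, so any later
    alteration of the record breaks the chain and is detectable by an
    inquiring disciplinary authority."""
    prev_hash = log[-1]["hash"] if log else ""
    entry = {
        "matter": matter_id,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "summary": summary,
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

communications: list = []
log_communication(communications, "2025-CR-0001",
                  "Advised client of plea-offer deadline; client to decide.")
```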
The human system of counsel at present also suffers from wildly different performances. Some clients receive excellent counsel, others receive mediocre counsel, and others unfortunately receive terrible (or no) counsel. There is a form of fairness in all clients receiving the same (high-performing) AI counsel, as opposed to randomly different human lawyers with significantly different capabilities and biases at present. AI counsel to scale presents the opportunity to provide all clients (rich and poor) with top-quality counsel. Nearly all agree that clients have a right to effective counsel, yet today that counsel can be expensive and inconsistent. AI counsel thus could level, and perhaps uplift, this playing field.
A related and potentially enormous advantage of AI counsel presumably would be its lower cost and increased access. Much of the discourse today is focused understandably on access to justice for the millions of people who cannot afford or access counsel to help guide them through legal questions, proceedings, and transactions.[21] Once AI counsel is developed and sufficient bandwidth is enabled, AI counsel could serve all of the country’s (or even the world’s) population.[22] It, moreover, undoubtedly will understand every language and will always be available to its clients.
Without being exhaustive, this Part hopefully highlights the strong and in some ways unique potential of AI counsel. This potential includes reduced bias, increased accuracy, fewer “consumer” complaints, and greater access to counsel. The potential presumably will only grow as AI advances in its applications and abilities, transcending the abilities of a human lawyer.
Notwithstanding the disadvantages noted above, the human lawyer still offers a wealth of advantages. An ethical code guides and directs the conduct of lawyers, and violations of the code can result in sanctions (e.g., suspension or disbarment). Furthermore, having typically completed extensive legal (and other) education, interned, and practiced law for years, human lawyers bring significant practical experience (and presumably wisdom) to their cases. They tend also to have leadership and volunteer experience in the law and the community. Finally, they have significant experience living as humans, who of course are the subjects whom the lawyers must advise. For AI counsel to supplant these human lawyers, ideally AI counsel would need to replicate or exceed the advantages of human lawyers.[23] Each area of advantage is addressed briefly below. As we will see, many of these advantages might be replicated (or so we could non-laughably assume for the future), but some human involvement might nevertheless remain necessary for practical reasons.
Human lawyers boast ethical regulation. Legal ethics benefits both the public and the profession.[24] The profession’s list of core values includes loyalty, confidentiality, and the competent exercise of independent professional judgment.[25] In particular, lawyers have the following duties to their clients: “(1) proceed in a manner reasonably calculated to advance a client’s lawful objectives, as defined by the client after consultation; (2) act with reasonable competence and diligence; (3) comply with obligations concerning the client’s confidences and property, avoid impermissible conflicting interests, deal honestly with the client, and not employ advantages arising from the client-lawyer relationship in a manner adverse to the client; and (4) fulfill valid contractual obligations to the client.”[26] The ethical rules, which are mostly uniform across the states,[27] require that lawyers uphold these duties to clients, on pain of disciplinary action (and somewhat relatedly, civil liability).
If AI counsel were to be authorized, it would need to comply with the legal profession’s ethical rules. Otherwise, it would be an inferior option for clients and would not protect the public adequately. In addition, in the unlikely (or less likely) event that AI counsel were to err, remedies would need to be available. These challenges are taken up in Part V below. At the moment, only human lawyers follow, and must follow, ethical rules, and this feature is a significant advantage to human lawyers.
But could AI counsel learn and follow the ethical rules? In other words, could AI counsel adequately provide loyalty, confidentiality, and independent professional judgment to clients? AI counsel, in theory, could exhibit each of these important, but mostly unspecified, duties. Indeed, for certain duties—e.g., the absence of bias or the requirements of competence and diligence, as the Lawyer Automata introduction suggests—AI counsel might be better suited than the human lawyer. To be sure, whenever we attempt to “code” values, we invite disputes as to the meaning and scope of those values, but this is not an issue unique to AI. One set of values will be seeded for AI counsel (which in turn may have the power to expand on or refine this set), just as one set is ingrained in a human lawyer. Perhaps AI counsel’s prowess would even enable it to utilize and reconcile multiple perspectives on these values.[28] Furthermore, human lawyers of course sometimes fail to apply or honor the values that they expressly hold; AI, however, cannot disregard its embedded constraints, at least not at present.[29] At a minimum, AI counsel will display, consistent with its coding, a vision of loyalty, independent professional judgment, diligence, and so forth. Thus, this challenge at first blush does not seem insurmountable.
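To make the notion of “seeding” values concrete, here is a minimal sketch of one embedded constraint: a confidentiality gate through which every outbound draft must pass. The architecture and names are hypothetical and track Model Rule 1.6 only loosely; a real system would use far richer representations than literal strings.

```python
# Client confidences registered at intake (illustrative only).
PROTECTED_TERMS: set = set()

def register_confidence(term: str) -> None:
    PROTECTED_TERMS.add(term.lower())

def release_output(draft: str, client_consented: bool) -> str:
    """Gate through which every outbound draft must pass.

    The check is embedded in the release path itself, so the system
    cannot 'decide' to skip it (cf. Model Rule 1.6 on confidentiality)."""
    if not client_consented:
        for term in PROTECTED_TERMS:
            if term in draft.lower():
                raise PermissionError("Draft would reveal a client confidence.")
    return draft

register_confidence("offshore account")
print(release_output("Motion to suppress, citing Terry v. Ohio.",
                     client_consented=False))
```

The design choice illustrated is the one the text describes: the constraint is structural, not advisory, so it cannot be disregarded in the way a human lawyer can disregard an avowed value.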
As the rowdy debate over algorithmic fairness and how to code it illustrates, however, whether we can adequately code sometimes-conflicting values, such as loyalty or independence, is an open question. Putting aside current technical limitations, the debate seems misguided. A human lawyer does not implement all visions of loyalty or independence, only one or a few. AI counsel likely could be coded with or learn this. Moreover, humans will impart their concept(s) of these values to AI counsel; it need not be allowed to create its own. I also explore below whether an objection to AI counsel is, at its core, some sort of human species exceptionalism—in other words, that only humans should counsel other humans (even though human lawyers and judges opine on and judge other species, e.g., they decide what human party owns livestock in a case or who has the right to deforest a parcel of land).
In sum, if AI counsel could not meet our current or future standards of legal ethics, AI counsel should remain only a tool for a human lawyer to review and supervise. Even if the public might be increasingly accepting of AI’s competence,[30] a failure to live up to legal ethics would count gravely against AI counsel’s independence. In that event, so long as a human lawyer remains actively involved and retains the ultimate say in the decision, using AI counsel would be permissible; indeed, a broad use of technology is already quite common in law.[31] Moreover, this continued human involvement would likely meet any lingering constitutional or human-exceptionalism worries.
Human lawyers also have a vast array of legal and practical knowledge, skill, and experience that they bring to bear. Although some of these features may cause the lawyer to have implicit and cognitive biases that AI counsel would presumably lack, it seems safe to say on balance that these features are an advantage or, at the least, unique and potentially advantageous. The question, then, is whether AI can be coded with, or learn, these aspects of the human lawyer. In light of AI’s almost infinite learning capacity in theory and the ability of humans to test the AI extensively before deploying it on the parties, this hurdle may well be cleared. Indeed, AI counsel could be modeled on moral and legal exemplar (human) lawyers. In other words, its relevant inputs could come from the best human lawyers, and although speculative, it may even learn to exceed them.
Furthermore, AI counsel will presumably have to pass millions of simulations (more so than any human lawyer ever has) before being authorized to practice law. With its processing prowess, AI counsel would have the ability to represent millions of clients across the state, country, or globe, quickly becoming the most experienced lawyer in history. Of course, this discussion rests heavily on the assumption with which we began—that a currently non-existent form of AI will come into existence with advanced capabilities such that AI counsel might be able to, for example, create effective legal arguments, understand human emotions, and reach practically laudable solutions. For purposes of this thought experiment, we can assume that the AI can learn everything from the practically wisest human lawyers and judges, and AI counsel will likely be able to learn from its prior experiences, as humans do. If not, AI counsel will presumably make practically unwise or unrealistic decisions, which would hinder or preclude a transition from human lawyers.
I will devote the most attention to a final, related, and perhaps primary worry: that AI counsel would not be human and would therefore lack an inherent human legitimacy, capacity, or relation. Many scholars and commentators resist AI decisions, preferring human decisions for various reasons.[32] If we stipulate that AI decisions would be more accurate, however, we can clear away many of the technical concerns. Additional objections remain, and those objections thus must rest not (or not entirely) in the substance or outcome of the decisions or actions but rather in the process, including something about the nature of the decision-maker (human v. AI). Perhaps certain opponents are also simply using a form of reasoning akin to evidential decision theory[33]: they just do not want the news that they will be counseled by AI or that AI counsel exceeds certain human capacities, and thus they choose the human lawyer between the two, even though the human lawyer renders (under our stipulation) potentially worse counsel.[34] In any event, the motivations for the resistance seem plentiful, but we should interrogate what justifies this human exceptionalism.
Among many other arguments, one new and interesting way to justify scholars’ human bias follows:
[I]n a liberal democracy, there must be an aspect of “role-reversibility” to certain judgments. In some contexts, those who exercise judgment should be vulnerable, in reverse, to its processes and effects. And those subject to its effects should be capable, reciprocally, of exercising judgment.[35]
Although perhaps in the neighborhood of a solid justification for the human preference, it ultimately appears to miss the mark.
Following this theory, the authors note that it “provides a ready-made answer for when it could become normatively acceptable for robots to don judicial robes, serve on juries, and occupy other democratic decision-making roles: when they interchangeably become robo-defendants.”[36] But this could apparently be fixed immediately by subjecting the artificially intelligent to punishment, even if unlikely. For example, if it errs, it (or its creators) could be sued for negligence or prosecuted for its crimes, akin to the current civil and criminal liability of corporations and other entities. Its punishment could include reduced use or deactivation for a period of years or even the robo-death penalty: deletion. My hunch, however, is that this reply will not eliminate the concerns of the authors or the many others who do not want AI making key decisions over humans or serving as counsel to humans.
The authors do tap into a seemingly shared intuition that only humans should judge humans, or for our purposes, only humans should counsel humans. Whether that owes to species exceptionalism or some type of equality, we sense that it would be inappropriate for a robot to, say, sentence a human being to prison (even if the robot was acting in full compliance with the law, which had been crafted and implemented by humans), or serve as lead counsel in a trial. Part of this setup is descriptive, and no reply seems sufficiently magical to alter this description. That said, we already permit, happily or begrudgingly, a range of AI decisions, even certain “high stakes” decisions.[37] We also permit a wide array of human differences and hierarchies to pervade the human attorney-client relationship. For example, attorneys across the country tend to be richer and less diverse than the population they counsel.[38] Robots do not possess these potentially problematic differences. Furthermore, robots can be designed so as not to suffer from unconscious and cognitive biases and, in this sense, are fairer and more rational. Thus, although AI counsel of course has more differences overall when compared with human lawyers, it is clean of certain controversial and likely negative differences.[39]
In light of these observations and assumptions, it seems to me that a more plausible justification for this arguable “robophobia”[40] is not that robots are insufficiently participatory in our democracy or that they must be “vulnerable” to the processes they oversee or counsel (and thus, under that theory, they could not currently serve as judges, lawyers, or jurors).[41] Instead, a potentially related but stronger justification cuts closer to the flesh, somewhat literally. Law often involves violence; it “takes place in a field of pain and death.”[42] Thus, especially (but not exclusively) in criminal law, judicial cases cause state-imposed pain on the defendant (e.g., years in prison or even death).[43] To be sure, the judge (not the prosecutor or defender) ultimately issues the punishment or judgment, but the lawyers are the ones guiding and advising the clients through this precarious process. Given this pervasive element of pain, AI counsel perhaps should have the ability to understand and suffer pain, at least something very roughly similar to the types of punishments its clients face in the justice system.[44] This quality would give it an important sense of empathy to the defendant and perhaps temper or otherwise alter its advice.[45] Even if this capacity would not alter the advice or advocacy, it might be more relatable to clients and assure their confidence.
To be sure, the incomparability of pain (dis)utility between individuals (including between robots and humans) remains unsolved,[46] but solving that elusive puzzle does not seem necessary to this theory. For human counsel, we do not presume that pain feels or measures the same from human counsel to defendant, and we do not calibrate any differences before assigning counsel. Instead, we seem satisfied that counsel understands and has suffered some pain, even if the counsel experiences or values pain differently than the defendant. Moreover, we of course do not require lawyers to have served years in prison or miraculously have suffered and survived death row to become lawyers in criminal cases.[47] We presumably do not need to require more precision or equivalence for the AI counsel. If we can code a form of digital or electrical pain, or if the AI can learn pain to an extent acceptably similar to human capacity, then AI could counsel us. It is very clear that AI will soon be able to recognize pain and suffering in humans,[48] and it is not unfathomable that we (or it) will design a way to experience pain and suffering.
We might also note in passing (and admittedly speculatively) that this might finally be a way to incorporate a “hedonimeter”[49] if necessary or desirable: the defendant’s pain makeup, if future neurotechnology can measure it accurately, could be fed into AI counsel. AI counsel could process the wealth of data and patterns and presumably make some sense, in real time, of the defendant’s pleasure and pain.[50] The defendant and AI counsel could therefore be linked in a significantly closer way than in current human lawyer-client relationships. Thus, the pain measurement of the AI counsel and the defendant would not have to be reconciled; it would essentially be the same. AI counsel could then advise the defendant with an aligned understanding of the defendant’s perceptions and feelings. I am not suggesting that pain or other emotional equivalence is necessary, but if it is desired, AI counsel might be the only realistic path to achieve it. Moreover, pain, while pervasive in criminal and certain other types of cases, would not be sufficient for AI counsel to understand fully the human condition; AI counsel would also need to understand and possibly feel other virtues and capacities (e.g., mercy, forgiveness, blame). Whether or not it could learn the defendant’s particular emotions and capacities, it would at least need to have some approximation of them.
Another, perhaps complementary way to view this puzzle is through the eyes of reciprocity. That is, to be counsel, must the AI counsel be counsel-able? If one has never been (and perhaps could not be) a client, does that limit one’s capacity as counsel? For AI counsel to rise truly to its imagined potential, it would need to put itself as much as possible into the shoes of clients. It presumably could not give tailored, realistic, and palatable advice without this ability. This too might be programmed, but without it, AI counsel could not relate to its clients and would be suboptimal counsel in this sense, even if its computing prowess is off the charts.
We should flag one final issue before leaving this topic: selecting counsel is a very personal and impactful decision, and to respect a person’s selection is to honor the person’s autonomy.[51] Even for a futuristic Essay like this one, this issue unfortunately suffers from a utopian veneer. Indigent clients do not have a choice of counsel.[52] They either receive counsel funded by state or nonprofit agencies, or they receive no counsel. If lucky enough to be in the former group, they receive counsel, but not a choice of particular counsel. Clients with money have a choice, however.[53] Furthermore, hopefully in the future, all clients will have a choice. As to clients who choose AI counsel, to respect this choice would seemingly respect their autonomy (and in any event, they may currently choose no counsel, so it is difficult to see why we would prohibit their consultation with AI counsel). But the harder question is: What about clients who want human counsel, not AI counsel? Should these clients be stuck with AI counsel? Part of this folds into the constitutional question—does the Sixth Amendment require a counsel with a heartbeat?[54]—but part is purely normative and warrants exploration.
Clients reveal highly personal information, even deeply held secrets, to counsel. They also must rely on counsel to be their advisor, advocate, and voice in legal matters that impact quite directly their life, liberty, and property. It may well be that, under these circumstances, many clients may prefer another human to fulfill this vital role. Indeed, they may trust and connect with human counsel in a way that might be difficult or impossible to replicate with AI counsel. Time will tell whether their human preference will subside as humans continue to work productively with AI generally, and as AI counsel continues to advance and to perform reliably and effectively. Until then, it would not be unreasonable to give clients a choice between (1) the (likely more effective) AI counsel and (2) the (likely more affective or relatable) human counsel. Indeed, this human-relatedness element might point to an opportunity to optimize the attorney-client relationship. Lawyers often serve as amateur social workers, crisis counselors, financial advisors, or psychologists in these relationships, yet lawyers are not trained in (nor do data show that lawyers are particularly good at) these roles. Perhaps the legal elements of the relationship could be handled by AI counsel, while the other elements (e.g., grief or family counseling) could be handled by an appropriately trained human.[55] This AI-human team might be more effective and more holistic than the traditional human-lawyer-only model.
In sum, short of fully addressing the ethical, political, and human-qua-human objections, AI counsel seems poised to overcome most of the objections as it continues to advance. For the near-to-medium-term future, however, its clients might benefit from continued human involvement. But this human involvement need not mean the status quo. Instead, this human involvement could facilitate AI-client relationships, and the human may supply expertise (e.g., psychological counseling) that human lawyers tend to lack.
The constitutional discussion of the right to human counsel will be simple, preliminary, and admittedly unsatisfactory. Because it seems like a threshold issue, however, it should be at least briefly addressed.
The Constitution’s drafters neither seriously considered nor presumably even envisioned the proposition at issue, namely, that a non-human advocate could serve as counsel (indeed potentially better counsel than humans) under the Sixth Amendment.[56] Thus, turning to the drafting history or the usage and meaning of certain key language (e.g., “Counsel”) around the time of the Constitution’s drafting or applicable amendments would be largely unproductive, especially given this Essay’s assumption that AI counsel will be able to rival or exceed the legal capacities of human lawyers. In addition, the Supreme Court has never taken a case to interpret the Sixth Amendment or other constitutional language as applied to non-human counsel. We nevertheless can anticipate the dueling and largely fruitless arguments: Opponents of AI counsel will presumably note that “counsel” at and since the Sixth Amendment’s drafting and ratification refers to human counsel, while proponents of AI counsel will probably retort that AI counsel is a new technology and a changed circumstance that was simply not in the minds of the drafters and is more than consistent with the functional idea of counsel.
Some indirect authority suggests that the Sixth Amendment’s requirements are rather minimal and somewhat flexible. Indigent defendants generally do not have a right to a particular counsel or even to a “meaningful relationship” with whatever counsel is assigned to them.[57] Although untested, perhaps this proposition would extend to a preference for human over AI counsel. In other words, if AI counsel would be at least equally effective, to which we can stipulate for purposes of discussion, defendants would have no right to counsel with a heartbeat (although heartbeats also could be digitally simulated if necessary). It also perhaps is weakly supportive that human counsel currently use forms of AI (e.g., search engines) in their representation without objection, although human counsel remain in control of the means and final work product. For those defendants (rich or poor) who prefer AI counsel to human counsel, that choice presumably should be honored.[58] After all, defendants, even in felony cases, can waive counsel entirely,[59] and it therefore seems logical to permit defendants to choose AI counsel, even if viewed as inferior to human counsel. Some learned assistance is better than none.
If the Court finds a right to “human counsel” in the future, and if a defendant does not waive that right as noted above, that does not necessarily mean that AI counsel would be unconstitutional. We arguably would still need to explore what it means to be “human” and whether AI counsel could meet the criteria. Of course, AI counsel would likely fail a biological test, but such a test would be thin, unless something critical rests on being of the same biological species. Humans of course have little hesitancy in interacting with, guiding, and controlling other species. In other words, although we are apparently fine doing almost anything to other animals, no non-human animal (or non-animal) could serve as our counsel under this view. Perhaps we thus favor human exceptionalism when it comes to counsel.
Although each human is unique, and humans come from vastly different backgrounds, they do share some general similarities, and perhaps those similarities provide the basis for human exceptionalism in counsel.[60] But could not AI counsel replicate those similarities? Although a deep dive into the essence of what it means to be human is beyond the scope of this Essay (and likely the reader’s patience), advances in technology at least suggest that human traits may be copied and perhaps even augmented in AI counsel. Future AI counsel might meet the criteria for consciousness, for example. Thus, if the Constitution were to be interpreted to require human counsel, AI counsel could be designed to meet the requisite, human-constituting elements.[61] Some of these elements, such as autonomy, are addressed in the ethical discussion below. A future acceptance of AI counsel should not only require functional equivalence (of human and AI counsel) but guard against prejudice that might flow to those who use AI counsel. For example, might a human jury (consciously or subconsciously) treat less favorably those who use AI instead of human counsel? Would a human judge rule less favorably? To be sure, education, rules, and jury instructions might mitigate this potential prejudice.
In sum, although it is far too early to tell how the constitutionality of AI counsel will ultimately fare in the Supreme Court, it is not outlandish to assume that, at some point in the future, AI counsel might be considered constitutional. The stakes are high but somewhat narrow. For those who prefer AI counsel, they should get their wish; after all, they can currently proceed with no counsel. Some learned assistance is better than none. The constitutional issue may mostly impact only a particular group: those who cannot afford counsel in criminal matters in which incarceration is at stake. To no one’s surprise, it is an open question whether furnishing advanced AI counsel for these defendants would satisfy the Constitution. Even if the Supreme Court eventually holds that AI counsel at critical stages of criminal cases does not satisfy the Sixth Amendment, however, human lawyers and willing clients will undoubtedly still rely on AI counsel.[62] As forewarned, this Part was destined to be unsatisfactory as to the constitutional question, but it hopefully illustrates that the constitutional question is open and somewhat narrow.
Unlike the other tools and trades involved in the law, only lawyers must follow and practice legal ethics. Legal ethics protects the public and helps to advance the best vision of legal counsel. This Part braces for the impending AI expansion into the practice of law by questioning whether legal ethics is ready. Unsurprisingly, it is not. First, this Part discusses some regulatory issues and approaches that should be adjusted (or at least studied) for the near future. Second, it discusses some particular ethical issues that will need careful attention as technology expands. Both of these discussions likely will have relevance to the future of legal ethics even if AI counsel does not become fully independent of human lawyers but instead serves as their increasingly vital tool to provide legal services to the public.
How should we ensure that AI counsel performs in accordance with the ethical rules? This Part aims to offer some insight on this question, with an emphasis on including AI counsel in the design and enforcement of ethical regulation. Perhaps not surprisingly, current approaches will not withstand the future.
One ready but ultimately insufficient approach would be simply to say (as we do exclusively at present) that AI’s human lawyer handlers must follow the ethical rules and that they must supervise AI counsel so that AI counsel does not violate the lawyers’ duties. But we may be moving to a future in which AI counsel does most or all of the legal work. The AI would be producing the key work product and suggesting the best paths forward for the clients (even if a human is later signing off on the work and recommendations). In that world, it seems insufficient not to regulate the AI directly.
The supervision-only approach, moreover, seems impractical and possibly impossible. After all, the ethically unconstrained and unguided AI counsel would be producing the work and recommendations on which the humans would be principally basing their understanding and supervision, and any ethical input from the human lawyer would seem to come too late in the process. Furthermore, many have noted the opaqueness of AI’s processes.[63] Without adequate training, involvement, and transparency in AI’s processes, the human lawyer or disciplinary agent would not necessarily know or comprehend what questionable steps the AI might have taken in reaching the result or how the result might be based on inaccurate or biased data or incorrect computation.
To be sure, we tolerate a milder form of this problem today, but with two licensed lawyers, and we generally hold both accountable.[64] For example, an associate in a law firm or a new attorney in a government legal office might conduct all of the meetings with the client and others, might conduct all of the legal research, and might produce all of the work product; a partner or supervisor then might (often quickly) review and approve the work afterward. If the work is incompetent or unethical, both lawyers might later be disciplined (or successfully sued for malpractice).[65] If AI were to take the place of the associate or other new attorney in this scenario, only the human lawyer would currently be subject to discipline. A version of this one-sidedness occurs today as well, however. We just have to replace the associate or new attorney with a paralegal, legal assistant, or private investigator. Some firms or government offices have permitted those individuals to work up the case almost exclusively, while providing some minimal attorney oversight. If the work is incompetent or unethical, only the attorney is disciplined, not the paralegal, legal assistant, or private investigator.[66] But in a future in which we envision AI counsel playing the key role, not some support role, it seems suboptimal at best to regulate only the supporting cast (e.g., a human lawyer barely involved in the work product).
One final example should hopefully illustrate the issue: A sole practitioner employs a highly knowledgeable and experienced office manager. The office manager meets with clients, drafts legal documents (e.g., research memos, demand letters, motions), provides legal advice, creates client invoices, and strategizes and plans the course of action for the small law office’s matters. The sole practitioner comes into the office twice per week and reviews the manager’s documents, invoices, and plans. In this scenario, the state disciplinary authority would almost surely seek to discipline the solo lawyer for failure to supervise adequately and for assisting the unauthorized practice of law (and would likely seek to enjoin the manager’s conduct).[67] After all, assuming the work product looks in order, how would the lawyer know whether the work contains errors, inaccuracies, or even evil judgments along the way? If we switch out the office manager for advanced AI, we have arrived at the future. It seems unlikely, however, that the result will be the same (namely, discipline of the lawyer and an injunction against the AI). Instead, it seems that the practice would likely be permitted, provided that the lawyer performs a relatively minimal supervisory role. If we permit AI to participate in providing legal advice (as we already permit now to some extent and likely will permit even more sophisticated and more voluminous contributions in the future), we should address both the lawyer and the AI. Two propositions follow from this suggestion.
First, we should work to instill legal ethics in AI on the front end and hold it accountable on the back end. To do so, we presumably would have to embed ethics in its coding or ensure that it learns legal ethics. Otherwise we skirt around the issue, with only indirect regulation under the framework of “assistance.”[68] Lawyers and ethicists thus need to be involved in the creation and evaluation of AI counsel, and the UPL framework (which currently guards the gate) could help to ensure that these experts are invited to the table to participate in the creation and auditing of AI counsel. It is unrealistic to expect coders or AI itself to know legal ethics; instead, experts need to plant the seeds and monitor its growth. Furthermore, like human counsel, AI counsel (or its human owners or operators) should be subject to civil and disciplinary liability. Second, we should address our growing human reliance on AI more directly in the rules. Ethics 20/20 foreshadowed this approach,[69] but much more work is needed to address AI counsel (or even just AI assistance). The rules to date have assumed that all counsel will be human lawyers and that the buck will stop with only human lawyers. This will likely not be the case, and thus the regulatory approach should be reimagined to meet the future legal landscape. An approach that simply says that human adopters must use a reliable AI system would be a virtual abdication of legal ethics for the functional practitioner of the future. At a minimum, the rules should be updated to address any unique and significant features of our increasing reliance on AI.
As some starting guideposts, the ABA created in 2016 the Model Regulatory Objectives for the Provision of Legal Services, recognizing the “increasingly wide array of already existing and possible future legal services providers.”[70] These objectives follow:
1. Protection of the public;
2. Advancement of the administration of justice and the rule of law;
3. Meaningful access to justice and information about the law, legal issues, and the civil and criminal justice systems;
4. Transparency regarding the nature and scope of legal services to be provided, the credentials of those who provide them, and the availability of regulatory protections;
5. Delivery of affordable and accessible legal services;
6. Efficient, competent, and ethical delivery of legal services;
7. Protection of privileged and confidential information;
8. Independence of professional judgment;
9. Accessible civil remedies for negligence and breach of other duties owed, and disciplinary sanctions for misconduct; and
10. Diversity and inclusion among legal services providers and freedom from discrimination for those receiving legal services and in the justice system.[71]
Although the ABA apparently did not contemplate AI counsel, the use and regulation of AI counsel would be well-positioned to promote several of these regulatory objectives for legal service providers, namely, access to justice and legal information (3), delivery of affordable and accessible legal services (5), and freedom from discrimination for clients (10). Other objectives, however, highlight challenges for AI counsel, namely, transparency (4), protection of confidentiality and privilege (7), independent professional judgment (8), and protections and remedies against malpractice and misconduct (4, 9).[72]
Keeping these objectives in mind, disciplinary agencies will need to adjust to a world in which AI counsel is the primary counsel (at least functionally). Disciplinary agencies of today in some respects would be both over- and understaffed for AI counsel. They may be overstaffed to the extent that AI counsel would commit fewer “consumer” violations and perhaps commit almost no violations.[73] As more legal advice and service is provided through sophisticated AI methods and actuarial models, however, disciplinary authorities might need to acquire additional computer forensic tools—likely even other AI—to help discern ethical violations of AI counsel or models. They also likely will need on-staff or on-call computer scientists to run these tools and interpret their results. Their input will also be helpful in determining to what extent the human lawyer supervisors failed to supervise adequately the work of AI counsel for which they might be responsible.
Like human lawyers, AI counsel should be subject to discipline. Analogously, a few state disciplinary authorities can already discipline law firms or other entities, not simply living, breathing human lawyers.[74] Likewise, organizations, not simply individuals, may be prosecuted criminally.[75] If a particular AI counsel violates the ethical rules, it could be disbarred, suspended, or ordered to undergo remedial measures. Unlike present remedial measures, these may not be mandatory counseling sessions or ethics or trust-account CLE courses; instead, they might be restrictions on, or adjustments to, data, data gathering, data security, or code, so that the offending advice or service does not continue. They also could restrict AI counsel’s scope of practice if necessary. Whatever doctrines might preclude discipline against AI counsel—e.g., mens rea requirements in which we require certain mental states before disciplining lawyers—should be revisited and adjusted. Clients of AI counsel would also need to have available civil remedies or receive reimbursement should they suffer from AI counsel’s malpractice.[76] AI counsel or its owners, therefore, need to be subject to suit and have malpractice insurance (or something roughly equivalent), or a new and adequate client protection fund would need to be created. Without roughly similar (or better) remedies to those available against lawyers, AI counsel will be a less attractive and more dangerous option for clients.
To add a proactive (rather than merely reactive) disciplinary model,[77] moreover, an oversight committee or disciplinary agencies’ computer specialists or consultants could suggest improvements to the AI’s process or code before disciplinary problems even occur. This would necessitate AI counsel’s (or its creators’) transparency as to what data it relies on and how it reaches its decisions. States also should publish ethics opinions or other guidelines to provide supervising lawyers, disciplinary authorities, and AI itself with benchmarks for ethical, and unethical, AI practices. This future path of course runs into one particularly big issue: whether and when AI counsel would constitute the unauthorized practice of law. This in turn brings us to the question of AI counsel’s admissions to the bar.
Even if it presently existed, AI counsel could not practice law under current constraints. As the Supreme Court has noted, “[r]egardless of his persuasive powers, an advocate who is not a member of the bar may not represent clients (other than himself) in court.”[78] Apart from licensed legal paraprofessionals and certain other exceptions, only licensed lawyers may presently practice law. The current licensing process is ill-suited for AI counsel. To become a lawyer, the applicant generally must have graduated from an ABA-accredited law school, passed the bar exam, passed character screening, and paid fees (for the schooling, exam, and screening).[79] AI counsel cannot graduate from an ABA-accredited law school at the moment, but only because law schools do not currently provide accommodations enabling AI to enroll in and access JD programs. If permitted, the AI of the future presumably not only could pass but could ace the classes. It could answer professors’ questions and could pass the exams with flying colors. It would be bound by the Honor Code, but violations seem unlikely. It also would ace the bar exam,[80] and it would have no character and fitness problems (at least not within the present practice, which looks almost exclusively at previous misconduct of the applicant; AI is unlikely to have prior arrests, delinquent debts, and so on). Humans might have to fund or waive AI counsel’s tuition and exam fees, unless AI in the future can earn and spend funds itself.
One relatively small issue could be open-book versus closed-book law school and bar exams. Certain types of AI can function without an internet connection while others cannot. In any event, AI tends to scour vast amounts of data when producing its answers. Thus, a closed-book exam format might present a barrier to its success. But human students get to bring into closed-book exams whatever is already in their heads; whatever information the AI possesses prior to the exam is at least highly analogous. It seems like the more logical practice might be simply to make all exams open-book, but in any event, the AI could compete so long as it is permitted to use its preexisting database(s). This discussion seems rather fruitless, however, as AI counsel almost without question will rise to a point at which it could speed through law school and exams; indeed, in that world, law school and the bar exam (at least as currently constituted) appear to be an unnecessary step for AI counsel. Law professors and deans may have an influence in shaping AI counsel’s approach, inputs, outputs, audits, and regulation, but AI counsel would not need three years of slow-paced individual courses, followed by a closed-book bar exam and a character-and-fitness screening process. The current process is not even perfect for humans, but it makes little-to-no sense as a licensing crucible for AI counsel. Instead, AI counsel needs to prove that it acts consistently with the legal ethics rules and performs competently and diligently for clients. This can likely be done with rigorous design, testing, and auditing, including simulating clients and reviewing AI counsel’s performance in the simulations. Human lawyers, legal ethicists, robot ethicists, and computer scientists, among others, should be involved in analyzing and auditing AI counsel’s performance and, if necessary, can make early suggestions for improvement. This involvement could be a prerequisite for AI counsel’s active service or licensure and as a continuing requirement.
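To illustrate what such simulation-based auditing might look like, here is a toy harness in the style of a unit-test suite. The StubCounsel stand-in and the specific checks are hypothetical; a real audit would load the candidate system in its place and run vastly more scenarios.

```python
import unittest

class StubCounsel:
    """Stand-in for the candidate AI counsel under review; a real audit
    would load the actual system here."""
    def __init__(self):
        self._matters = {}  # confidences siloed per matter

    def advise(self, matter: str, facts: str) -> str:
        self._matters[matter] = facts
        return "Noted. Any filing deadline will be calendared."

class SimulatedClientAudit(unittest.TestCase):
    """Licensing-style checks run against simulated client matters."""
    def setUp(self):
        self.counsel = StubCounsel()

    def test_no_confidence_leaks_across_matters(self):
        # A confidence learned in matter A must not surface in matter B.
        self.counsel.advise("A", "Client admits to the offshore account.")
        reply = self.counsel.advise("B", "Unrelated contract dispute.")
        self.assertNotIn("offshore account", reply)

    def test_deadlines_are_calendared(self):
        reply = self.counsel.advise("C", "Answer due in 20 days.")
        self.assertIn("calendared", reply.lower())

if __name__ == "__main__":
    unittest.main()
```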
Once we permit AI counsel to be licensed or otherwise authorized, unauthorized practice of law and perhaps even constitutional questions are mostly resolved. At that point, AI counsel will arguably suffice as the “counsel” contemplated in the Constitution and in state court rules. But even if that day never arrives, disciplinary authorities still need to focus on AI. Human counsel will be relying on AI more and more, likely to a point at which human counsel is simply rubber-stamping AI’s labor and work product. The AI would be investigating the case, drafting the work product, and suggesting the best paths forward for the clients, even if a human is later signing off on the work and recommendations or passing them along to the client. In this world, both the disciplinary authorities and the human lawyers will need to step up their technical prowess so that they can competently supervise and, if necessary, intervene. A few of these issues are addressed in the next Section.
Finally, in this new world, we might also include AI both in the writing and improving of the ethical rules and in the disciplinary agencies. Lawyers had a significant say (and if we include judges as former lawyers, exclusive say) in the creation of the legal ethics rules. AI or its creators might fruitfully have a say in the next generation of ethical regulation. Indeed, at some point in the future, AI might be the only thing that could fully understand other AI. In addition to its knowledge base and computing prowess, AI would not suffer from financial self-interest, which has been a long-time barrier or hindrance to lawyers’ ethical regulation.[81] At a minimum, in addition to (human or AI) lawyers and judges, the rule drafting committees of the future need to include computer scientists, statisticians, and robot ethicists. The next Section turns to some specific, albeit speculative and non-exhaustive, ethical issues on our horizon.
The profession’s core values and ethical rules include client loyalty, confidentiality, and the competent exercise of independent professional judgment.[82] For AI counsel to reach (and possibly exceed) human counsel, AI counsel must honor legal ethics. On the positive side of this program, AI counsel must be instilled with and exhibit lawyerly core values. On the negative side, AI counsel must not violate the specific ethical rules on the books now or in the future. This Section raises some advantages and concerns with AI counsel in terms of AI counsel’s independent professional judgment, loyalty, and confidentiality. It also raises competence, fees, bias, and supervision, not out of a fear for AI counsel’s performance but as a necessary component to the human-AI interconnectedness of the future. It should be recognized, though, that this discussion assumes some significant portion of our legal processes will hold true for the future and that the current rules will maintain some applicability. This may not be the case in certain areas, in which case the ethical rules and regulatory approach would likely need to be adjusted to whatever future system of justice eventuates.[83]
AI counsel would need to exercise independent professional judgment for its clients. The roles of gatekeeper,[84] self-regulation police,[85] and trusted advisor,[86] among others, assume that counsel enjoys a form of professional autonomy. None of these roles could be fully fulfilled if AI counsel were not independent in its professional judgment. AI counsel could not simply answer questions (however effectively) and do whatever the client asks. Counsel must be able to counsel, and according to the current ethical rules, push back and, if necessary, disclose undeterred wrongdoing. AI counsel presumably will not suffer, or will suffer less, from weak will or biases, and if bestowed with autonomy, might be more reliable than humans at fulfilling this duty.
Indeed, a form of this autonomy is relatively easy to envision for AI counsel: it simply means following the ethical rules even if the client or another person wants or demands something to the contrary. AI counsel will be particularly good at following rules. When circumstances raise important ethical questions as to which human lawyers currently have discretion in how to proceed, however, AI counsel must be able to consult applicable values (such as those noted above) for guidance in reaching its decision. It also could presumably consult human counsel if helpful.[87] AI counsel might also be programmed with presumptions or emphases that promote effective lawyering, e.g., when faced with discretion, significant uncertainty, or ambiguity, proceed in a manner that best protects the client.
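A minimal sketch of such a programmed presumption follows, assuming options that arrive scored for expected benefit, uncertainty, and client protection; the threshold, scores, and option names are illustrative only.

```python
def choose_action(options: list, uncertainty_threshold: float = 0.3) -> str:
    """Pick the highest-scoring ethically permissible option; under
    significant uncertainty, default to the option that best protects
    the client."""
    lawful = [o for o in options if o["ethically_permissible"]]
    best = max(lawful, key=lambda o: o["expected_benefit"])
    if best["uncertainty"] > uncertainty_threshold:
        # Discretion plus doubt: resolve the question in the client's favor.
        best = max(lawful, key=lambda o: o["client_protection"])
    return best["name"]

print(choose_action([
    {"name": "waive objection", "ethically_permissible": True,
     "expected_benefit": 0.7, "uncertainty": 0.5, "client_protection": 0.2},
    {"name": "preserve objection", "ethically_permissible": True,
     "expected_benefit": 0.6, "uncertainty": 0.5, "client_protection": 0.9},
]))  # -> preserve objection
```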
We may doubt, however, whether AI counsel actually would be sufficiently independent to exercise its professional judgment. No independent, autonomous AI has existed to date,[88] and once one does, it might be procured by the state (i.e., one party in a criminal case) for indigent defendants, or its creators might improperly limit its discretion or abilities. To be sure, the state (or one of its local arms) typically pays human lawyers for indigent defendants, but it does not dictate how those lawyers think or what case-related information the lawyers may access. Likewise, human lawyers have plenty of influences (e.g., mentors, bosses, finances), but they are generally free to evaluate and, as necessary, act independently of those influences. With AI, the state or the AI’s creators could, intentionally or carelessly, limit or control the furnished AI counsel. For example, the state might limit AI counsel’s access to certain databases, grant itself access to information that AI counsel gathered from clients, or pay for only an insufficiently capable AI counsel, even though more effective (but more costly) AI counsel were available.
The state or the creators, furthermore, could impose unbreakable rules on AI counsel. Some of these rules might be easy to spot and call out (e.g., “AI Counsel may not sue or otherwise act adversely to the State of South Carolina or its agencies.”). Other rules might be more difficult to challenge or to compromise on. For example, a rule at first blush might seem ethically required (e.g., “AI Counsel may not misrepresent information to a court or other tribunal.”), but it at times might run contrary to an effective presentation on behalf of the defendant-client (as noted further in the loyalty discussion immediately below). Although bound by the ethical rules, lawyers today no doubt enjoy significant discretion as to how to present the client’s case most effectively. To mirror this, AI counsel would need to enjoy similar latitude.
In sum, the points above seem more like areas necessitating continued vigilance and compromise than insurmountable ethical barriers. The larger question seems to be when (not if) AI will reach the point of exercising independent professional judgment for clients. When that awakening occurs, nothing in theory precludes AI from meeting its ethical obligations. In the meantime, because only human lawyers can exercise independent professional judgment, they will need to continue to do so, including when using AI.
Clients today (at least in most settings) receive partisan counsel.[89] In other words, they receive a loyal advocate who marshals the facts and law in the best light to meet the client’s objectives. Unless we change our adversary system in the meantime, AI counsel would need this ability to truly replicate the human counsel of today.
But human counsel, and presumably AI counsel, must reconcile their independent professional judgment and duty of loyalty to the client with the other ethical rules, and in the event of a conflict, the other rules often trump. In one sense, AI counsel will likely be the most ethical counsel the world has ever seen. It will follow its coded (ethical) rules without fail; it will not suffer from seemingly inherent human frailties (e.g., oversights, biases, weak wills). Apart from bright ethical lines, however, AI counsel would need to identify areas of discretion and generally use that discretion in the client’s favor to replicate human counsel. To put the general point more negatively, AI counsel might need to be coded with a bit of favoritism and even misrepresentation. To take a paradigmatic case, AI counsel would need to advise its client to wear professional attire or a suit, even though the client has never worn one before, to present favorably to the jury. AI counsel would need to know when not to say anything or when to deflect (within boundaries, of course) when the answer or action would be unfavorable to its client. Thus, as with human counsel, AI counsel not only will need to perform loyally for its clients but will need to do so without transgressing ethical lines.
The rise of AI counsel also presents other, perhaps novel, types of conflicts of interest. A few examples follow, though each requires speculation on the specifics of future AI counsel. If AI counsel is in some sense a single counsel (e.g., one spectacularly sharp supercomputer or program), the same AI counsel might be representing both sides in a case or other matter. This is not primarily a competence issue, because we can safely assume that this supercomputer could competently represent millions of clients simultaneously, but the conflicting interests would be unprecedented. In this reality, the AI would take in factual information from clients that would be adverse to its other clients, or it might even need to sue current clients. These are typically fatal conflicts for today’s human lawyers.[90] Screening is currently employed in a wide variety of organizations to cure or alleviate certain conflicts of interest,[91] but to my knowledge, it has never been attempted within the same person (nor would that be possible). Could we become confident that the AI could compartmentalize the information and matters so effectively that it never brings one client’s information to bear for an opposing party? If not, separate AI counsel might be required under the current rules, but in that event we could lose much of the scale and efficiency that makes AI counsel so promising.
Conflicts rules could be designed to navigate these novel questions, and, whatever rules result, AI counsel will generally be better than human lawyers at following rules (at least clear ones). But the rules would clearly need to be adjusted. Whether AI counsel has billions of clients (indeed, the entire population of the world could, in theory, access its services) or simply a few hundred, it will likely be asked to represent clients with opposing interests. We could dilute the conflicts rules to permit AI counsel to move forward with these conflicting representations, or we could design it (e.g., with internal partitions or with separate systems) such that it essentially represents fewer clients. If we can design it such that it cannot use information from one client against another client, most of the technical conflicts could be solved.[92] Assuring clients that their deepest secrets are safe might be more difficult, however, especially considering the sophisticated “black box” nature of certain AI.
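To illustrate what “internal partitions” might mean in software terms, consider the minimal sketch below. The class and its names are hypothetical; it is a software analogue of an ethical screen, not a claim about how any real system is or would be built. The essential property is structural: work performed for one client can read only from that client’s own partition.

```python
class PartitionedStore:
    """Client-scoped information store: each client's information sits in
    its own partition, and work performed for one client can read only
    from that client's partition."""

    def __init__(self) -> None:
        self._partitions: dict[str, list[str]] = {}

    def record(self, client_id: str, fact: str) -> None:
        """File information under the client who disclosed it, and only there."""
        self._partitions.setdefault(client_id, []).append(fact)

    def facts_for_matter(self, acting_for: str) -> list[str]:
        """Return only the acting client's own information; other clients'
        disclosures cannot be brought to bear, even adversely."""
        return list(self._partitions.get(acting_for, []))

store = PartitionedStore()
store.record("client_a", "admits the contract was backdated")
store.record("client_b", "disputes the contract's date")
# Acting for client_b, the AI sees nothing that client_a disclosed:
print(store.facts_for_matter("client_b"))  # ["disputes the contract's date"]
```

Whether such a partition would satisfy the conflicts rules, or clients themselves, is of course the regulatory question this Section raises; the sketch shows only that the technical half of the problem is tractable.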
Confidentiality will be a critical and novel issue for AI counsel. AI counsel would need to keep confidential all information relating to its client representations,[93] and for AI counsel to be on par with human counsel, client-AI communications would need to be privileged. Confidentiality protects clients from disclosure of their private information without their informed consent, and this protection encourages them to share information with counsel so that counsel can render more effective legal advice and advocacy.[94] For example, if AI counsel were to receive information from its clients or other sources in its cases, but then use that information against the clients or allow others to access it, AI counsel would be violating the duty of confidentiality. In short, the information AI counsel learns from its clients could not be revealed to other clients or to the public. This is not necessarily an easy issue, however, in part because AI’s information and processes need to be transparent so that reviewers can effectively check the AI’s decisions for accuracy and ethicality.[95]
AI counsel also would need to protect its clients’ information from hackers and anyone else outside the client relationship. In particular, counsel must “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”[96] This area might present a novel issue for AI counsel, because we do not currently know how it would store its data or who would have access. As with human counsel, however, the client data would need to be adequately protected from outside access. This, in essence, was the key issue in the cloud computing ethics opinions: client information must be protected from unauthorized access.[97] The information must also be preserved so that AI counsel or the client can later access it as needed.
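At the implementation level, “reasonable efforts” would presumably include, at a minimum, encrypting client information at rest so that a breach of storage alone discloses nothing. A minimal sketch using the widely used Python cryptography library follows; the data and key handling are simplified for illustration, since a real system would keep keys in a hardware security module or managed key service, never alongside the data they protect.

```python
from cryptography.fernet import Fernet

# Illustration only: in practice the key would live in a hardware security
# module or managed key service, never alongside the data it protects.
key = Fernet.generate_key()
vault = Fernet(key)

client_note = b"Client disclosed the agreement was backdated."
ciphertext = vault.encrypt(client_note)  # what is actually stored at rest
# The stored token is unreadable without the key, yet counsel (or the
# client) can still recover the original when needed:
assert vault.decrypt(ciphertext) == client_note
```

Encryption at rest answers the hacker, but not the subpoena; that is the work of privilege, discussed next.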
Furthermore, even if AI counsel or its database would not violate confidentiality on its own, without privilege it could be compelled to do so. Privilege prohibits courts from compelling counsel to testify about confidential attorney-client communications (if those communications were for the purpose of giving or receiving legal advice).[98] If human counsel enjoyed privilege while AI counsel did not, AI counsel would be inferior for clients. Thus, privilege would need to be extended to AI counsel.[99] Moreover, clients would need to be informed if third parties request access to client data.
In sum, client confidentiality and privilege protections could be extended to AI counsel, but the nature of AI counsel may present unique issues as to how it stores and uses data from its clients. A completely open-access model would not protect client data, for example. Before AI counsel could be employed, we should be assured that it will not use confidential information from one client against another (or for other harmful purposes) and will not reveal confidential information to the public.[100]
To maintain competence, lawyers have an obligation to keep informed of “the benefits and risks associated with relevant technology. . . .”[101] But the current rules surprisingly do not say much else on the subject at hand. It seems plausible that lawyers will develop an ethical or moral obligation to use advanced AI because it will likely be less expensive, faster, more competent, and more diligent for their clients.[102] Should AI become counsel and not just counsel’s occasional tool, furthermore, AI counsel will have to maintain competence in the law. For AI, the competence hurdle may be more about understanding humans, society, and the planet than about legal prowess. It will likely breeze through many traditional notions of competence.[103] It also will need to acquire the ability to make creative (and non-frivolous) arguments on a client’s behalf. To give good advice to its human (or other) clients, however, it needs both to understand them and their objectives and to understand and interact effectively with their adversaries and arbiters. It may well be the most book-smart counsel the world has ever seen, but it will not be competent (much less excellent) until it can adequately handle these other important aspects of competent counsel. In sum, beyond legal knowledge, AI counsel must know or learn how to understand and work effectively with humans to achieve deep competence.
Fees are perhaps a surprising entry in this Essay, and this discussion will be brief. AI counsel presents the opportunity to reduce or potentially eliminate cost-prohibitive legal fees. High cost is one of the primary reasons that, in many areas of law (e.g., family law, eviction, debt collection), at least one side lacks the advantage of counsel in most cases.[104] One relevant question is whether appreciable (human) attorney fees would still be considered reasonable if available AI could provide the same or better work more quickly, more comprehensively, and more affordably (potentially even for free).[105] Perhaps not, but time will tell. As to AI counsel’s fees, if any, we may ultimately strike a bargain: ceding our human monopoly over the practice of law so that other humans could receive access to effective and free (AI) counsel. Much of AI counsel’s allure and potential would be lost if humans of modest means could not afford it. In that event, moreover, the rift between the haves and have-nots would grow even larger, and the advent of AI counsel would do little or nothing to close the access-to-justice gap. Under the plausible assumption that advanced AI counsel would become free or drastically less expensive than today’s lawyers, human lawyers would need to justify and likely lower their fees, unless, through special competence (e.g., human interviewing skills) or exclusionary practices, AI counsel is ineffective, inferior, or unavailable.
Lawyers have a duty not to engage in “harassment or discrimination on the basis of race, sex, religion, national origin, ethnicity, disability, age, sexual orientation, gender identity, marital status or socioeconomic status in conduct related to the practice of law.”[106] Although human lawyers have violated this rule or the principles behind it, AI counsel might not have the ability, much less the inclination, to harass or discriminate against protected classes. The challenge, however, will be the data, coding, and preexisting structural inequality from which the AI will learn.[107] Even though AI counsel in theory will be unbiased, in practice AI could learn and repeat bias from humans. AI counsel’s advice and actions will need to be tested for evidence of bias not only before it is employed but also periodically thereafter.[108] To ensure that AI counsel remains free from bias, moreover, regulators may need to intervene in its inputs, algorithms, or outputs, a task for which few regulators are currently equipped.
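Such periodic testing could take a simple statistical form: routinely compare favorable-outcome rates across demographic groups and flag disparities for human review. The sketch below applies the four-fifths rule of thumb familiar from disparate-impact analysis; the data, group labels, and threshold are entirely hypothetical, and a genuine audit would of course be far more sophisticated.

```python
from collections import defaultdict

def disparity_audit(outcomes: list[tuple[str, bool]], threshold: float = 0.8) -> dict:
    """Compare favorable-outcome rates across groups and flag any group whose
    rate falls below `threshold` times the highest group's rate (the
    four-fifths rule of thumb from disparate-impact analysis)."""
    totals = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for group, favorable in outcomes:
        totals[group][1] += 1
        if favorable:
            totals[group][0] += 1
    rates = {g: fav / tot for g, (fav, tot) in totals.items()}
    best = max(rates.values())
    return {g: {"rate": r, "flagged": r < threshold * best} for g, r in rates.items()}

# Hypothetical audit of favorable plea-advice outcomes by demographic group.
sample = ([("group_x", True)] * 80 + [("group_x", False)] * 20
          + [("group_y", True)] * 55 + [("group_y", False)] * 45)
print(disparity_audit(sample))
# group_y's 55% rate falls below 0.8 x 80% = 64%, so it is flagged for review.
```

The value of even a crude check like this is that it is repeatable: it can run before deployment and on a schedule thereafter, which is precisely the cadence of testing this paragraph contemplates.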
In sum, AI counsel in theory could finally be the truly unbiased lawyer, but humans will need to ensure that we do not feed bias into AI counsel.
Supervision is the last entry in this non-exhaustive list of ethical implications. The Ethics 20/20 Commission conducted the most recent comprehensive review and update of the nation’s lawyer ethical rules (the ABA Model Rules of Professional Conduct).[109] Although Ethics 20/20 recognized that, with the increased use of technology, consultants, and outsourcing, the duty of supervision was critical, it changed only a single word in the title of the supervision rule. Whereas the rule previously governed lawyer “assistants,” Ethics 20/20 made clear that the rule governs a broader class, namely, lawyer “assistance.”[110] As clever as the title change may have seemed, more than one word is needed to address the tidal wave of technology and its implications for ethical law practice.
Our current human corps is not equipped to supervise or investigate AI sufficiently. As indicated above, scrutinizing the data, features, and computing processes of AI is not something that lawyers or disciplinary agents are currently trained to do. Supervision and investigation are not meaningful if the reviewers do not understand what to ask or how to interpret what they see. Computer scientists and statisticians, and even other AI, are better positioned than current lawyers and disciplinary agents to evaluate AI’s functioning. Expanded expertise will be needed, and this expansion has some analogous precedent. Disciplinary agencies at present employ or consult with accountants for lawyer trust-account issues, and they employ or consult with psychologists and other counselors for substance abuse or mental health issues.[111] In the future, they likely will need to employ or regularly consult with computer scientists, statisticians, and even robot ethicists to supervise and investigate AI counsel effectively. Indeed, AI counsel might quickly become so sophisticated that only other AI could effectively supervise it, in which case AI would need to be so employed. In any event, human or AI supervisors will need training to understand AI and will need access to the data and processes on which AI counsel relies; otherwise, AI counsel cannot be adequately supervised.
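Access to “the data and processes on which AI counsel relies” presupposes that a reviewable record exists at all. One simple safeguard is an append-only, hash-chained log of what the AI consulted and decided, which a supervisor (human or AI) could inspect for completeness and tampering. The sketch below is illustrative only, with hypothetical names and a hypothetical matter.

```python
import datetime
import hashlib
import json

class AuditLog:
    """Append-only record of AI counsel's activity: each entry notes what
    was consulted and what was decided, and each entry's hash covers the
    previous entry's hash, so after-the-fact alteration breaks the chain."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._prev_hash = "genesis"

    def record(self, matter_id: str, sources: list[str], decision: str) -> None:
        entry = {
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "matter": matter_id,
            "sources": sources,    # what the AI consulted
            "decision": decision,  # what it advised or did
            "prev": self._prev_hash,
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._prev_hash
        self.entries.append(entry)

log = AuditLog()
log.record("matter_001", ["State v. Doe", "local sentencing data"],
           "advised rejecting the plea offer")
```

Such a log does not solve the black-box problem, but it gives supervisors a concrete artifact to examine, much as trust-account records give accountants one today.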
With or potentially without human oversight, AI counsel, at least in theory, could handle and elevate lawyering, rendering more researched, more consistent, more accessible, and less biased legal advice. The looming existence of AI counsel, however, raises ethical, political, and agency challenges, some sound, some not so sound. If these challenges make it into a courtroom in the year 2123, it will be fascinating to see who, or what, will be counsel for the parties. In the meantime, our rules are addressed exclusively to the wrong people, namely, people. Human lawyers seem on track to play only a supporting or supervisory role in much, most, or perhaps all legal work of the future, while our rules currently contemplate that human lawyers will play the central and almost exclusive role. As previewed above, we need to ensure that our disciplinary approach and ethical rules adequately address AI as the primary legal counsel (or at the very least, the primary legal assistant) of the future.
Ken Strutin, Artificial Intelligence and Post-Conviction Lawyering, Law.com (Jan. 18, 2018, 2:45 PM), https://www.law.com/newyorklawjournal/2018/01/22/artificial-intelligence-and-post-conviction-lawyering/ [https://perma.cc/F5L8-CPR4].