Part 1

If we are increasingly willing to consign our fortunes to the advice of artificially intelligent financial advisers and place our mortal survival in the hands of robo-surgeons and driverless cars, when should we entrust our legal rights to robo-lawyers?

This four-part series explores whether we want robo-lawyers and when they are likely to rise, what it will take for a robo-lawyer to understand a human client and that client's legal issues, the subtle legal skills we will need to develop in a robo-lawyer, and the deeper changes society will face before embracing robo-lawyers. Part 4 of this series also posits five questions you should ask your robo-lawyer before abandoning corporeal counsel.

The growth of artificial intelligence (AI), machine learning, speech recognition, big data, blockchain and other related technologies is making possible many new robo-professionals, from tax preparers to financial advisers, medical diagnosticians, surgeons and autonomous drivers. The pace of change is accelerating, such that there is now discussion of AI studying law, leading at least some to question whether there will be an Armageddon for lawyers as well.

Do We Even Want Robo-Lawyers?

As access to legal service providers becomes quicker, easier and cheaper, better information should lead to better decision-making, which in the aggregate should then result in more predictable outcomes, less contentious work such as litigation, and greater client profits. Cheaper access to legal services also improves opportunities for those who cannot afford it or who otherwise do not easily appreciate the value of legal advice. If we can accept cars driving us, and robots operating on us, we will eventually come to accept robots practicing law for us.

The fullness of time probably renders this question irrelevant, or at least less interesting than the questions "how will we humans know when our time has come?" and "how should we assess the value of a robo-lawyer's legal advice until the change-over is complete?"

How Long Do We Have?

Speech recognition has been around for more than 60 years. Westlaw and Lexis-Nexis have been identifying useful case law for around 45 years, roughly half of those years using natural language searches. One of the more advanced AIs approaching the bar right now goes by the name ROSS, powered by IBM Watson. ROSS claims to take in questions in natural language and output not only relevant case law and statutes, but also answers to some basic legal questions. ROSS began in the area of bankruptcy law, but its competencies will grow quickly. ROSS has a budding number of kindred e-spirits, which include Peter, which verifies and organizes documents and document signings; an AI from Luminance operating in the mergers and acquisitions space; LONALD; and nearly everyone's favorite, DoNotPay, a bot for contesting parking tickets. These AIs operate on technology marketed as neural networks and deep learning, which in 1997 beat the human world chess champion (IBM's Deep Blue), in 2011 won at the TV show "Jeopardy!" (IBM's Watson), and recently beat human experts at the game of Go (AlphaGo by DeepMind) and poker (Libratus).

In the next few years, AI will continue to accelerate downsizing in many areas of legal services, including those related to processing and filing documents, maintaining client bills, conducting and processing document discovery, interfacing with third-party service providers, and general law practice management. There is already a reduction in the number of billable hours lawyers and their assistants pass through to clients in relation to quasi-legal administrative matters and even basic legal research.

Present-day software can ask a client questions and use the input to choose among pre-drafted contracts, wills and other agreements, and then modify them in simple, predictable ways. Sufficiently powerful computers like ROSS are beginning to review all prior legal decisions on a narrow legal issue and provide suggested answers, perhaps accompanied by a confidence level and even a list of outlying data that cannot be harmonized with the advice. At the edge of what is conceivable with existing technology, we may start to see software that drafts very rough versions of substantive legal correspondence, followed by portions of briefs, opinions and judicial decisions.
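For the technically curious, the question-and-answer document assembly described above is, at its core, little more than template selection and fill-in-the-blanks. A minimal sketch, assuming hypothetical templates and field names (none drawn from any actual product):

```python
# Illustrative sketch of the "document assembly" flow described above:
# ask the client questions, pick a pre-drafted template from the answers,
# then modify it in simple, predictable ways. All template text and field
# names here are hypothetical.

TEMPLATES = {
    "will": "I, {name}, being of sound mind, leave my estate to {beneficiary}.",
    "nda": "{name} agrees to keep {counterparty}'s information confidential.",
}

def assemble(answers: dict) -> str:
    """Select a template based on the client's answers and fill in the blanks."""
    template = TEMPLATES[answers["document_type"]]
    # str.format ignores answer fields the chosen template does not use.
    return template.format(**answers)

# A client intake of question-and-answer pairs produces a filled-in draft:
print(assemble({
    "document_type": "will",
    "name": "A. Client",
    "beneficiary": "B. Heir",
}))
```

The point of the sketch is how shallow this generation of tools is: the "intelligence" lies entirely in the pre-drafted templates, which is why such software handles only simple, predictable modifications and nothing resembling legal judgment.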

AI and big data will continue to improve the quality of predictions and client counseling in increasingly complex matters, so that costs and delays are reduced, or clients may use the law toward more strategic outcomes. Each of these improvements is likely to begin in stable or formulaic areas of law like traffic and parking violations, insurance claims, real estate (especially conveyancing), contracts and some areas of family law like drafting basic wills. Improvements will then move toward such things as jury selection and first drafts of trademark and later patent registration filings.

We can assume that the current trends of improved speech recognition, natural language parsing and deep learning will continue, perhaps, as Bill Gates suggests, overestimating what is likely in the next few years but underestimating what is likely in the next 10.

Part 2

To drive somewhere, all you have to tell an automated car is your destination; the rest is technology, albeit complex, amazing and sometimes patentable technology. Given any two physical coordinates, the same technical solution should be applicable to any human passenger. However, when legal issues are involved, a human client may not be able to articulate a preferred outcome and also usually needs advice on what results are even possible. A lawyer thus needs a lot more information, and an AI needs a lot more complex, amazing and sometimes patentable technology.

Can artificial intelligence understand its client? Clients often fail to appreciate the relevance of information, and even when they do, they often choose, consciously or not, not to provide it. Some of this information lawyers can get from evaluating the client, including gestures, posture, eye movements, pauses, and even tears. An uncomfortable client can often signal weaknesses where more legal attention or a work-around is needed. Similar observations about opponents, witnesses, opposing counsel, judges and jurors can expose areas for quick settlements or further pressure. Sometimes the subtleties of word choice and intonation provide this kind of information, and sometimes what isn't said, or even who is brought out to say it, speaks volumes.

Can a Client Communicate with an Artificially Intelligent Lawyer?

On the flip side of reading human communication, we humans have learned to input data into computer systems more efficiently, such as by typing language and using clearer and simpler speech. But simply creating robo-lawyers does very little to train clients to communicate more clearly and precisely, in the way that a robo-lawyer may require before providing even responsive legal advice. It seems unlikely that lawyers will draft simpler briefs, judges simpler opinions, or legislatures simpler laws, just so that they are understood more easily by AI and the AI is more productive in the future. It is even less likely that we will learn to simplify our nonverbal communication so that it can be parsed more easily by robo-lawyers. Our AI has a long way to go in learning to read and understand human communication, conduct and choices.

Can a robo-lawyer identify the relevant legal facts? Trained humans are particularly good at searching for facts that might expose strengths or weaknesses in a case, or change the nature of the case completely. One might argue that AI will be better able to combine facts and law to reach better or more predictable outcomes by running an inconceivable number of lines through data points or extrapolating from thousands of other final decisions. This process is unavailable to humans except over great almost-evolutionary periods of time.

There is a continuum between cases where the facts place the case into a particular box, and cases where the facts are so significant that it is years before anyone can guess not just the outcome, but which legal rules should even be applied. At the near end of this continuum, current robo-lawyers may perform better than humans where the bits or qubits of factual and legal data can be divided into logical categories and where the questions have clear right or wrong answers. But what about the gray areas where the rest of legal practice resides?

We would expect litigants to dispute relevant facts and even which facts are more relevant to a legal conclusion. We know that juries are occasionally hung. Lawyers on the same side of a matter often dispute which facts or legal issues are more important, and even the same lawyer may justifiably adjust an opinion as a case matures. Even after the case is fully argued by the parties asserting their strongest adversarial positions, the importance of the facts and the law often remain unclear.

Judges sometimes report that they start off drafting a decision in one direction only to realize that it won't "write" that way, and they change the reasoning or even the judgment. Judges, panels of judges on the same court, and whole courts can disagree with one another. In the United States, even the Supreme Court is often split as to the "right" outcome by dissents, or the right way to get to the same outcome by concurring opinions, and which facts are relevant to the legal reasoning. And of course, the legislature can always disagree and change the law to accommodate political interpretations of public policy.

Providing legal advice often requires not just perceiving and understanding the relevant and nuanced facts, but also placing them in useful contexts alongside relevant precedents, as well as evaluating the probable outcomes given the various human-made and human-controlled legal and political institutions.

One day, we or our AI will be able to program the entirety of the legal system—lawyer, jury and judge—and amalgamate all levels of law-making so that everyone can consult software to obtain predictable answers to the questions of what is legal, who is at fault, or how to obtain the intended outcome. At that point, we may resolve the ambiguity and unpredictability created by the differences in understanding among juries, lawyers, judges, appellate judges and various local, national and international legislatures and policy-making bodies. When AI ultimately bypasses the human inability to fully comprehend, systematize and articulate relevant legal facts in choosing a course of action—akin to autonomous cars bypassing the need to share responsibility with human drivers—obtaining a just legal result might even become technically easier than having a mixed system of robot- and human-made law.

Part 3

There is a significant difference between providing an accurate or even a thorough legal answer, and providing a legal opinion that is strategic, that is likely to be trusted and followed by the client, that does not create additional liability for the client and that zealously protects the client's legal rights.

Advice vs. Strategy

Consider the difference between advising whether a company can legally claim a certain tax deduction, and providing a legal opinion about how to structure a multinational company and where to incorporate it so as to maximize R&D tax credits while minimizing long-term tax liability given the company's likely investors, manufacturing needs, customer base and exposure to certain types of litigation. In my field of work, AI might soon be able to draft and file a trademark application, albeit with limited regard to maximizing potential legal rights or avoiding longer-term risks and liabilities. But AI cannot yet devise a strategy to minimize and spread costs and risks over time, while maximizing potential rights, and positioning the rights globally with respect to likely competitors.

While AI may be programmed to recognize some of its own limitations in order to advise clients when they should seek more capable legal representation, as a human lawyer is expected and in many instances required to do, it will take considerably more progress before AI appreciates the difference between when a client needs an answer and when the client needs legal strategy.

The contrast between these skills is even more stark where the client chooses not to follow the attorney's advice. In these instances, legal opinions can create significant risk and liability for a client. For this reason, a good legal opinion pays significant attention to what it does not address or conclude, and to the way in which it carefully words what it does conclude—even in circumstances where the client would not appreciate the additional care. We will need to see AI provide this level of strategic advice before we grow comfortable with a transition to a solo-practicing robo-lawyer.

Trust and Influence

The current process of finding and selecting a human lawyer invests authority and trust in the lawyer's advice, so that the client is more likely to accept and follow it. The human lawyer then spends time growing that relationship of trust so that the advice can be more effective, or at least so that the client continues to pay and does not seek alternate legal representation. However, robo-lawyers could be programmed to provide advice that aids a client in known illegal activity or that furthers a criminal enterprise—things human lawyers are not permitted to do. Software could be programmed to identify risky or illegal conduct and suggest alternatives. But robo-lawyers will need to develop the moral and experiential authority needed to push back effectively, encouraging and explaining alternatives until the client alters course.

When a client refuses to alter course, a human lawyer has a number of options designed to protect the client, preserve the integrity of the legal system, and balance societal and ethical norms. Through all of this, lawyers are required to maintain client confidences. Leaving aside the need to ensure against hacking, we will need to teach AI to preserve these client confidences even without the fear of being sued or disbarred for choosing incorrectly.

For example, the attorney-client privilege encourages clients to consult with lawyers in confidence in circumstances where ignorance is less risky or less costly than taking remedial action; think of cases involving corporate fraud, products liability and willful patent infringement. A human lawyer can investigate, marshal technical and human resources from third parties, and report legal advice to a client in a way that encourages appropriate action, without providing underlying facts that might become discoverable by others. While the public may have a strong interest in knowing the facts or the subject of the attorney-client communication, we generally accept that encouraging legal representation and investigation leads to better outcomes for society. A robo-lawyer will need programming to carry out such investigations, walking the fine line between what information to disclose to and what information to withhold from its client.

Zealous Advocacy

To take a particularly contentious example of zealous advocacy, a court may grant a plaintiff's motion seeking to pierce a defendant's attorney-client privilege and discover the substance of its communications with its lawyer. If revealed, the damage to the defendant may be irreparable. A human lawyer, as an officer of the court, may be required to use his reputation, experience and standing to buy the client time to appeal the order piercing privilege—as appellate courts not only can, but sometimes do, change or limit these rulings, whether through a different interpretation of the facts or the law, or both, or even by making new law. Sometimes the human lawyer has to do this by weighing the risk of judicial sanctions to the client and the lawyer, and by advocating right up to the line.

At present the attorney-client privilege does not even protect a client's query to a software tool without the human lawyer running the search. Assuming this changes as legal systems evolve to incorporate AI as providers of legal services, what happens when a court orders piercing of this privilege? We will need to conceive how an AI can appreciate and weigh the threat of sanctions to its client, as well as determine when compliance with a court order is necessary or just strongly encouraged.

Part 4

Whether good or bad, the law is used by some to obtain or retain advantages over others, like education, money and politics. Over short and medium time frames, access to AI—and access to better AI—will likely skew toward those who can afford to supplement quality human legal advice with it, for their separate advantage. So, until AI can make quality legal services equally available to all, we human lawyers have a continuing social obligation to supplement pro bono efforts with access to the legal technologies used by our paying clients. Remember, "the future is already here—it's just not evenly distributed." (A remark widely attributed to William Gibson.)

Indifferent to motivations, our individual legal needs result in judicial decisions and laws that feed a necessary but obscured dialectic process, akin to the work of Hari Seldon. (Isaac Asimov, "Foundation," Gnome Press 1951.) We want our laws to be, or at least be perceived as, final and immutable. But we also want our laws to evolve so that they fit better with our ever-growing understanding of our morals, and our social, technical and business needs. By working meticulously and conscientiously to resolve our separate legal needs, by making sense of and then applying what appears to others as legal minutiae, lawyers and judges help to promote the process, corral affluent or disruptive outliers, and at the same time obscure the forest by focusing on the seeds.

Robo-lawyers can collect statistics and will soon start recommending changes to our laws. The European Union is conducting a project in Germany called IMPACT to automate processes for developing legal policy. AI will soon start innovating within the law, testing limits and loopholes, and playing a more significant part in the evolution of legal theory. AI can amalgamate and parse data. AI can perform sensitivity analyses on conclusions. But knowing which novel legal arguments are likely to work from within the system, without changing it so much that it snaps, suggests a long road ahead for our robo-lawyer. Perhaps more importantly, even when AI becomes capable of creating and revising our laws, there is a meaningful difference between near-immutable law pronounced by a digital black box and near-immutable law pronounced by our most esteemed legal minds.

Our AI is not yet a part of the same legal context that we are tasking it to interpret. It does not yet contemplate the possibility that the law will be applied to it, to its friends, parents and children. It does not yet understand the meaning of the historical, political and factual context, the gut feeling and the experience of the persons or committees that drafted laws and legal opinions, or the nuance that is needed to deconstruct that meaning and rebuild it over and over again into coherent, persuasive—though sometimes not entirely logical—accepted legal theories. A similar technical gulf also needs to be overcome when these new legal theories are applied in the service of individual human clients. A robo-lawyer experimenting with evolving law needs to be able to determine which new legal arguments will align with the specific needs, preferences and likely reactions of its particular client.

In some extreme cases, our human legislators become too stymied by political forces to make necessary social change, and these changes have to be made by judicial intervention—something we see in civil rights cases, for example. We may be able to develop AI that can make these bold decisions for us. Given the right technological progress, we may develop AI that packages these decisions as predetermined, almost inevitable legal conclusions—for the sake of the human psyche. And in the final development step for our AI, the legal Armageddon itself may give way to further technological improvement that frees our human soul, repackaging the decision in a way designed to activate the same rush of social pride that comes when present-day human judicial intervention confirms a bold, massive and unalterable positive sea change in humanity.

A Social Litmus Test

Advancements in relevant technology will continue to see humans and AI work hand-in-circuit, even as robo-lawyers continue to displace humans. But we are unlikely to see a watershed change until at least four things happen:

  • By some as yet undefined test in nearly all areas of law, AI becomes predictably more accurate than the relevant human counterpart in providing not just correct legal answers but also more valuable strategic legal opinions (perhaps a derivative of the Turing test).
  • The cost-benefit analysis of human advice-seekers tips from accepting to trusting and preferring a zealous software-derived advocate.
  • Our broader legal institutions shift toward a software-first model (think robo-legislature and perhaps robo-judge).
  • We either abandon our belief that our common law legal system should evolve while preserving a sense of permanence, or AI learns to tweak our laws and educate humans in a manner that we generally agree is better than, or at least indistinguishable from, human tinkering.

Until the legal Armageddon, if your lawyer is billing you for work product that could be done by an AI, ask for a discount, or better yet, get a better human lawyer. And when a robo-lawyer starts pitching you for business, use the following questions to test whether it is time to switch.

  • Will it understand my legal needs, even when I don't?
  • Will it go beyond answering my legal question and provide me a strategic legal opinion?
  • Will I trust and follow its advice?
  • Will it be my zealous advocate?
  • Will it balance evolution of the law with the rule of law, without telling me?

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.