“AI vs. Lawyers” – Interesting Result, Bad Headline

The recent clickbait headline “AI vs. Lawyers: The Ultimate Showdown” might lead some to believe that an artificial intelligence system and a lawyer squared off as opposing parties in a legal dispute (never mind that, as far as US jurisprudence is concerned, an “intelligent” machine has not been recognized as having rights or standing in state or federal courts).

Follow the link, however, and you end up at LawGeex’s report titled “Comparing the Performance of Artificial Intelligence to Human Lawyers in the Review of Standard Business Contracts.” The 37-page report details a straightforward, but still impressive, comparison of the accuracy of a machine learning model and lawyers performing a common legal task.

Specifically, LawGeex set out to consider, in what they call a “landmark” study, whether an AI-based model or skilled lawyers are better at issue spotting while reviewing Non-Disclosure Agreements (NDAs).

Issue spotting is a task that paralegals, associate attorneys, and partners at law firms and corporate legal departments regularly perform. It’s a skill learned early in one’s legal career and involves applying knowledge of legal concepts and issues to identify, in textual materials such as contract documents or court opinions, specific and relevant facts, reasoning, conclusions, and applicable laws or legal principles of concern. Issue spotting in the context of contract review may simply involve locating a provision of interest, such as a definition of “confidentiality” or an arbitration requirement in the document.
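
To make the task concrete, here is a minimal, purely illustrative Python sketch of the crudest form of provision spotting: checking whether a few provision types appear in a contract’s text. The provision names and regular-expression patterns are hypothetical examples invented for this post, not anything used by LawGeex or any other vendor, and a real review tool would rely on far richer linguistic and contextual signals.

```python
import re

# Hypothetical provision names and patterns, for illustration only.
PROVISION_PATTERNS = {
    "confidentiality_definition": r'"?confidential information"?\s+means',
    "arbitration_requirement": r"\bbinding arbitration\b",
    "governing_law": r"\bgoverned by the laws of\b",
}

def spot_provisions(contract_text: str) -> dict:
    """Report which of the example provision types appear in the contract."""
    text = contract_text.lower()
    return {name: bool(re.search(pattern, text))
            for name, pattern in PROVISION_PATTERNS.items()}

sample = ('"Confidential Information" means any non-public information '
          "disclosed by either party. Any dispute shall be resolved by "
          "binding arbitration before a single arbitrator.")
print(spot_provisions(sample))
# {'confidentiality_definition': True, 'arbitration_requirement': True,
#  'governing_law': False}
```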

Legal tech tools using machine learning algorithms have proliferated in the last couple of years. Many involve combinations of AI technologies and typically require processing thousands of documents (often “labeled” by category or type of document) to create a model that “learns” what to look for in the next document it processes. In LawGeex’s study, for example, its model was trained on thousands of NDA documents. Following training, it processed five new NDAs selected by a team of advisors, while 20 experienced contract attorneys were given the same five documents and four hours to review them.
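
As a rough illustration of that supervised-learning workflow (and not LawGeex’s actual system, whose internals the report does not disclose), here is a minimal Python sketch assuming scikit-learn is available: clause texts labeled by provision type are used to fit a classifier, which can then label clauses in a new NDA. The tiny training set and label names are invented for the example.

```python
# Toy sketch of supervised clause classification with scikit-learn. Real
# systems train on thousands of labeled documents; these few invented
# clauses only show the shape of the workflow.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_clauses = [
    "Recipient shall keep all Confidential Information strictly confidential.",
    "This Agreement shall be governed by the laws of the State of Delaware.",
    "Any dispute arising hereunder shall be settled by binding arbitration.",
    "Recipient agrees not to disclose Confidential Information to any third party.",
]
labels = ["confidentiality", "governing_law", "arbitration", "confidentiality"]

# TF-IDF features plus a linear classifier stand in for whatever proprietary
# model a vendor actually uses.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(training_clauses, labels)

# Classify a clause from a new, previously unseen NDA.
new_clause = "All disputes shall be resolved exclusively through arbitration."
print(model.predict([new_clause])[0])
```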

The results were unsurprising: LawGeex’s trained model spotted provisions from a pre-determined set of 30 with a reported accuracy of 94%, compared to an average of 85% for the lawyers (the highest-performing lawyer, LawGeex noted, reached 94%, equaling the software).

Notwithstanding the AI vs. lawyers headline, LawGeex’s test results raise the question of whether the task of legal issue spotting in NDA documents has been effectively automated (assuming a mid-nineties accuracy is acceptable). And do machine learning advances like these portend that other common tasks lawyers perform will someday be handled by intelligent machines?

Maybe. But no matter how sophisticated AI tech becomes, algorithms will still require human input. And they remain a long way from handling a client’s sometimes complex objectives, the unexpected tactics opposing lawyers might deploy in adversarial situations, common sense, and the other inputs that factor into a lawyer’s context-based legal reasoning and analysis. No AI tech can handle all of that. Not yet, anyway.

When It’s Your Data But Another’s Stack, Who Owns The Trained AI Model?

Cloud-based machine learning algorithms, made available as a service, have opened up the world of artificial intelligence to companies without the resources to develop their own AI models in-house. The providers of these services promise to help a company extract insights from its unique customer, employee, product, business process, and other data, and to use those insights to improve decisions, recommendations, and predictions without the company hiring an army of data scientists and full-stack developers. Simply open an account, provide data to the service’s algorithms, train and test an algorithm, and then incorporate the final model into the company’s toolbox.
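
Schematically, that workflow might look like the following Python sketch. Every hostname, endpoint, and field name here is hypothetical, standing in for whatever API a particular provider actually exposes; a real integration would follow the provider’s own documentation.

```python
# Hypothetical sketch of a cloud ML-as-a-service workflow using plain HTTP.
# The host, endpoints, and field names are invented for illustration only.
import requests

BASE = "https://ml.example-cloud-provider.com/v1"      # hypothetical service
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}   # account credentials

# 1. Upload the company's proprietary training data.
with open("production_process_records.csv", "rb") as f:
    dataset = requests.post(f"{BASE}/datasets", headers=HEADERS,
                            files={"file": f}).json()

# 2. Ask the service to train and test a model on that data.
job = requests.post(f"{BASE}/models", headers=HEADERS,
                    json={"dataset_id": dataset["id"], "task": "classification"}).json()

# 3. Once training completes, call the hosted model from the company's systems.
prediction = requests.post(f"{BASE}/models/{job['model_id']}/predict",
                           headers=HEADERS,
                           json={"instances": [{"temperature": 81.2, "pressure": 3.4}]}).json()
print(prediction)
```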

While it seems reasonable to assume a company owns a model it develops with its own data–even one based on an algorithm residing on another’s platform–industry practice on ownership is not uniform. Why this matters is simple: a company’s model (characterized in part by its parameters, network architecture, and architecture-specific hyperparameters) may provide the company with an advantage over competitors. For instance, the company may have unique and proprietary data that its competitors do not. If a company wants to extract the most value from its data, it should take steps to protect not only that data but also the models created from it.
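
For a concrete sense of what “the model” is as an asset, here is a brief, hypothetical Python sketch (assuming scikit-learn and joblib): a trained model ultimately reduces to the hyperparameters chosen before training plus the parameters learned from the company’s data, both of which can be exported as files a company may want to treat as proprietary.

```python
# Sketch of what a trained model reduces to: hyperparameters chosen before
# training plus the parameters learned from the company's data. The file
# names are illustrative.
import json
import joblib
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(C=0.5, max_iter=1000)  # architecture-specific hyperparameters
# clf.fit(X_train, y_train)  # training on proprietary data would populate
#                            # the learned weights clf.coef_ and clf.intercept_

# Both the hyperparameters and the serialized, trained model object are
# artifacts a company may want to treat as proprietary, alongside the data.
with open("model_hyperparameters.json", "w") as f:
    json.dump(clf.get_params(), f, indent=2)

joblib.dump(clf, "trained_model.joblib")
```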

How does a company know it has not given away rights to its own data uploaded to another’s cloud server, and that it owns the models it creates based on that data? Conversely, how can a company confirm the cloud-based machine learning service has not reserved any rights to the model and data for its own use? The answer, of course, is likely embedded in the multiple terms of service, privacy, and user license agreements that apply to use of the service. If important provisions are missing, vague, or otherwise unfavorable, a company may want to look at alternative cloud-based platforms.

Consider the following example. Suppose a company wants to develop an AI model to improve an internal production process, one the company has enhanced over the years and that gives it a competitive advantage over others. Maybe its unique data set derives from a trade secret process or reflects expertise that its competitors could not easily replicate. With data in hand, the company enters into an agreement with a cloud-based machine learning service, uploads its data, and builds a model using the service’s AI technologies, such as natural language processing (NLP), computer vision classifiers, and supervised learning tools. Once the best algorithms are selected, the data is used to train them and a model is created. The model can then be used in the company’s operations to improve efficiency and cut costs.

Now let us assume the cloud service provider’s terms of service (TOS) state something like the following hypothetical:

“This agreement does not impliedly or otherwise grant either party any rights in or to the other’s content, or in or to any of the other’s trade secret or rights under intellectual property laws. The parties acknowledge and agree that Company owns all of its existing and future intellectual property and other rights in and concerning its data, the applications or models Company creates using the services, and Company’s project information provided as part of using the service, and Service owns all of its existing and future intellectual property and other rights in and to the services and software downloaded by Company to access the services. Service will not access nor use Company’s data, except as necessary to provide the services to Company.”

These terms would appear generally to protect the company’s rights and interests in its data and any models created using that data, and they indicate the machine learning service will not use the company’s data or the resulting model except to provide the services. That last part–the exception–needs careful attention, because a provider can define the services it performs quite broadly.

Now consider the following additional hypothetical TOS:

“Company acknowledges that Service may access Company’s data submitted to the service for the purpose of developing and improving the service, and any other of Service’s current, future, similar, or related services, and Company agrees to grant Service, its licensees, affiliates, assigns, and agents an irrevocable, perpetual right and permission to use Company’s data, because without those rights and permission Service cannot provide or offer the services to Company.”

The company may not be comfortable agreeing to those terms, unless they are superseded by other, more favorable terms in another applicable agreement governing use of the cloud-based service.

While AI may be “the new electricity” powering large portions of the tech sector today, data is an important commodity all its own, and so are the models behind an AI company’s products. So don’t forget to review the fine print before uploading company data to a cloud-based machine learning service.

Legal Tech, Artificial Intelligence, and the Practice of Law in 2018

Due in part to a better understanding of available artificial intelligence legal tech tools, more lawyers will adopt and use AI technologies in 2018 than ever before. Better awareness will also drive the creation and marketing of specialized AI-focused practice areas within law firms, more lawyers with AI expertise, new business opportunities across multiple practice groups, and possibly another round of Associate salary increases as demand for AI talent, both in-house and at law firms, escalates in response to the continued expansion of AI in key industries.

The legal services industry is poised to adopt AI technologies at the highest level seen to date. But that doesn’t mean lawyers are currently unfamiliar with AI. In fact, AI technologies are already widely used by legal practitioners: tools that power case law searches (web services in which a user’s natural language query is processed by a machine learning algorithm that returns a ranked, sorted list of relevant cases), and tools used in electronic discovery of documents (predictive analytics software that finds and tags relevant electronic documents for production during a lawsuit, based on a taxonomy of keywords and phrases agreed upon by the parties).
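
As a rough illustration of the ranked-search idea, and not any particular vendor’s system, the following Python sketch (assuming scikit-learn) embeds a few invented case summaries and a user’s natural-language query as TF-IDF vectors and ranks the cases by cosine similarity to the query. Commercial tools use far more sophisticated language models, but the ranking principle is similar.

```python
# Toy sketch of natural-language case search: rank a few invented case
# summaries by similarity to a user's query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

case_summaries = {
    "Smith v. Jones": "Breach of a non-disclosure agreement and misuse of trade secrets.",
    "Acme v. Widget Co.": "Patent infringement claims over a machine vision sensor.",
    "Doe v. Roe": "Employment dispute involving a non-compete covenant.",
}
query = "Can an employer enforce a confidentiality agreement against a former employee?"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(case_summaries.values()) + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Print the cases sorted by relevance to the query, most relevant first.
for name, score in sorted(zip(case_summaries, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {name}")
```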

However, newer AI-based software solutions from companies like Kira and Ross, among dozens of others now available, may further improve the legal services industry’s understanding of AI. These solutions offer increased efficiency, improved client service, and reduced operating costs. Efficiency, measured in the time it takes to respond to client questions and the number of billable hours expended, can translate into reduced operating costs for in-house counsel, law firm lawyers, judges, and their staffs, which is sure to get attention. AI-powered contract review software, for example, can take an agreement provided by opposing counsel and nearly instantaneously spot problems, a process that used to take an Associate or Partner a half-hour or more, depending on the contract’s complexity. In-house counsel are wary of paying biglaw hourly rates for such mundane review work, so software that can perform some of it seems like a perfect solution. The law firms and lawyers that become comfortable using the latest AI-powered legal tech will be able to boast of being cutting edge and client-focused.

Lawyers and law firms with AI expertise are beginning to market AI capabilities on their websites to retain existing clients and capture new business, and this should increase in 2018. Firms are focusing efforts on industry segments most active in AI, such as tech, financial services (banks and financial technology companies or “fintech”), computer infrastructure (cloud services and chip makers), and other peripheral sectors, like those that make computer vision sensors and other devices for autonomous vehicles, robots, and consumer products, to name a few. Those same law firms are also looking at opportunities within the ever-expanding software as a service industry, which provides solutions for leveraging information from a company’s own data, such as human resources data, process data, quality assurance data, etc. Law practitioners who understand how these industries are using AI technologies, and AI’s limitations and potential biases, will have an edge when it comes to business development in the above-mentioned industry segments.

The impacts of AI on the legal industry in 2018 may also be reflected in law firm headcounts and salaries. Some reports suggest that the spread of AI legal tech could lead to a decrease in lawyer ranks, though most agree this will happen slowly and over several years.

At the same time, however, the increased attention directed at AI technologies by law firm lawyers and in-house counsel in 2018 may put pressure on law firms to adjust Associate salaries upward, as many did during the dot-com era when demand skyrocketed for new and mid-level lawyers equipped to handle cash-infused Silicon Valley startups’ IPO, intellectual property, and contract issues. A possible Associate salary spike in 2018 may also be a consequence of, and fueled by, the huge salaries reportedly being paid in the tech sector, where big tech companies spent billions in 2016 and 2017 acquiring AI start-ups to add talent to their rosters. A recent report suggests annual salary and other incentives in the range of $350,000 to $500,000 are being paid to newly-minted PhDs and to those with just a few years of AI experience. At those levels, recent college graduates contemplating law school and a future in the legal profession might opt instead to head to graduate school for a Masters or PhD in an AI field.

The AI Summit New York City: Takeaways For the Legal Profession

This week, business, technology, and academic thought leaders in Artificial Intelligence are gathered at The AI Summit in New York City, one of the premier international conferences for AI professionals. Below, I consider two of the three takeaways from Summit Day 1, published yesterday by AI Business, from the perspective of lawyers looking for opportunities in the burgeoning AI market.

“1. The tech landscape is changing fast – with big implications for businesses”

If a year from now your law practice has not fielded at least one query from a client about AI technologies, you are probably going out of your way to avoid the subject. It is almost universally accepted that AI technologies in one form or another will impact nearly every industry. Based on recently-published salary data, the industries most active in AI are tech (think Facebook, Amazon, Alphabet, Microsoft, Netflix, and many others), financial services (banks and financial technology companies or “fintech”), and computer infrastructure (Amazon, Nvidia, Intel, IBM, and many others; in areas such as chips for increasing computational speed and throughput, and cloud computing for big data storage needs).

Of course, other industries are also seeing plenty of AI development. The automotive industry, for example, has already begun adopting machine learning, computer vision, and other AI technologies for autonomous vehicles. The robotics and chatbot industries have made great strides lately, both in humanoid robot development and in consumer-machine interaction products such as stationary and mobile digital assistants (e.g., personal robotic assistants, as well as utility devices like autonomous vacuums). And the software as a service industry, which leverages information from a company’s own data, such as human resources data, process data, healthcare data, etc., seems to offer new solutions for improving efficiency every day.

All of this will translate into consumer adoption of specific AI technologies, which is reported to already be at 10% and growing. The fast pace of technology development and adoption may translate into new business opportunities for lawyers, especially those who invest time in learning about AI technologies. After all, as in any area of law, understanding the challenges facing clients is essential for developing appropriate legal strategies, as well as for targeting business development resources.

“2. AI is a disruptive force today, not tomorrow – and business must adapt”

“Adapt or be left behind” is a familiar warning, and there is plenty of evidence that it holds true in many situations.

Lawyers and law firms as institutions are generally slow to change, often because things that disrupt the status quo are viewed through a cautionary lens. This is not surprising, given that a lawyer’s work often involves thoughtfully spotting potential risks and finding ways to address them. A fast-changing business landscape racing to keep up with the latest AI technologies may be seen as inherently risky, especially in the absence of targeted laws and regulations providing guidance, as is the case today in the AI industry. Even so, exploring how to adapt one’s law practice to a world filled with AI technologies should be near the top of every lawyer’s list of things to consider for 2018.