10 Things I Wish Every Legal Tech Pitch Would Include

Due in large part to the emergence of advanced artificial intelligence-based legal technologies, the US legal services industry is in the midst of a tech shakeup. Indeed, the number of advanced legal tech startups continues to grow, and with it the number of opportunities for law firms to receive product presentations from those vendors.

Over the last several months, I’ve participated in several pitches and demos from leading legal tech vendors. Typically given by company founders, executives, technologists, and/or salespeople, these presentations have been delivered live, as audio-video conferences, by phone with a separate web demo, or as pre-recorded audio-video demos (e.g., a slide deck video with voiceover). Often, a vendor’s lawyer will discuss how his or her company’s software addresses various needs and issues arising in one or more law firm practice areas. Most presentations also include statements about advanced legal tech boosting law firm revenues, making lawyers more efficient, and improving client satisfaction (ostensibly, a reminder of what’s at stake for those who ignore this latest tech trend).

Based on this (admittedly small) sample of presentations, here is my list of things I wish every legal tech presentation would provide:

1. Before a presentation, I wish vendors would provide an agenda and the bios of the company representatives who will be delivering the pitch. I want to know what’s being covered and who’s going to be giving the presentation. Do they have a background in AI and the law, or are they tech generalists? This helps me prepare for the meeting and frame questions during Q&A (and reduces the number of follow-up conference calls). Ideally, presenters should know their own tech inside and out, as well as an area of law, so they can show how the software makes a difference in that area. I’ve seen pitches by businesspeople who are really good at selling, and by programmers who are really good at talking about bag-of-words bootstrapping algorithms. It seems the best person to pitch legal tech is someone who knows both the practice of law and how tech works in a typical law firm setting.

2. Presenters should know who they are talking to at a pitch and tailor accordingly.  I’m a champion for legal tech and want to know the details so I can tell my colleagues about your product.  Others just want to understand what adopting legal tech means for daily law practice. Find out who’s who and which practice group(s) or law firm function they represent and then address their specific needs.

3. The legal tech market is filling up with offerings that perform a single, narrow function, so I want to understand all the ways your application might help replace or augment law firm tasks. Mention the practice areas where your tech is best deployed (or where it could be deployed in the future, in the case of features still in the development pipeline). The more capabilities an application has, the more attractive your prices begin to appear (and the fewer vendor roll-outs and training sessions I and my colleagues will have to sit through).

4. Don’t oversell capabilities. If you claim new features will be implemented soon, they shouldn’t take months to deploy. If your software is fast and easy, it had better be both, judged from an experienced attorney’s perspective. If your machine learning text classification models are not materially different than your competitors’, avoid saying they’re special or unique. On the other hand, if your application includes a demonstrable unique feature, highlight it and show how it makes a tangible difference compared to other available products in the market. Finally, if your product shouldn’t be used for high stakes work or has other limitations, I want to understand where that line should be drawn.

5. Speaking of overselling, if I hear about an application’s performance characteristics, especially numerical values for things like accuracy, efficiency, and time saved, I want to see the benchmarks and protocols used to measure those characteristics. While accuracy and other metrics are useful for distinguishing one product from another, they can be misleading. For example, a claim that a natural language processing model is 95% accurate at classifying text by topic should be backed up with comparisons to a benchmark and an explanation of the measurement protocol used (a sketch of the kind of evaluation I have in mind appears after this list). A claim that a law firm was 40-60% more efficient using your legal tech, without details about how those figures were derived, isn’t all that compelling.

6. I want to know if your application has been adopted by top law firms, major in-house legal departments, courts, and attorneys general, but be prepared to provide data to back up claims.  Are those organizations paying a hefty annual subscription fee but only using the service a few times a month, or are your cloud servers overwhelmed by your user base? Monthly active users, API requests per domain, etc., can place usage figures in context.

7. I wish proof-of-concept testing were easier. It’s hard enough to get law firm lawyers and paralegals interested in new legal tech, so provide a way to facilitate testing your product. For example, if you pitch an application for use in transactional due diligence, provide a set of common due diligence documents and walk through a realistic scenario. This may need to be done for different practice groups and functions at a firm, depending on the nature of the application.

8. I want to know how a legal tech vendor has addressed confidentiality, data security, and data assurance when its product is a cloud-based service. If a machine learning model runs on a platform that is not behind the firm’s firewall and intrusion detection systems, that’s a potential problem in terms of safeguarding client confidential information. While vendors need to coordinate first with a firm’s CSO about data assurance and security, I also want to know the details.

9. I wish vendors would provide better information demonstrating how their applications have helped others develop business. For example, tell me if using your application helped a law firm win a Request for Proposal (RFP), or if a client gave more work to a firm that demonstrated advanced legal tech acumen. While such information may be merely anecdotal, I can probably champion legal tech on the basis of business development even if a colleague isn’t persuaded by things like accuracy and efficiency.

10. Finally, a word about design. I wish legal tech developers would place more emphasis on UI/UX. Some recent offerings appear ready for beta testing rather than a roll-out to prospective buyers. I’ve seen demos in which a vendor’s interface contained basic formatting errors, something any quality control process would have caught. Some UIs are bland and unintuitive when they should be user-friendly and have a quality look and feel. Use a unique theme and graphics style, and adopt a brand that stands out. For legal tech to succeed in the market, technology and design both must meet expectations.
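
To make the benchmarking point in item 5 concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of evidence I’d like to see behind an accuracy claim: a held-out, labeled document set, a stated split and random seed, and per-topic metrics rather than a single headline number. The benchmark file and topic labels here are hypothetical placeholders.

```python
# Minimal sketch of a benchmark-style accuracy evaluation for a text-by-topic
# classifier. The benchmark file and topic labels are hypothetical.
import json

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

# Hypothetical benchmark: one record per document, e.g.
# {"text": "...", "topic": "indemnification"}
with open("benchmark_documents.json") as f:
    records = json.load(f)

texts = [r["text"] for r in records]
topics = [r["topic"] for r in records]

# Fixed split and seed so the measurement protocol is reproducible.
X_train, X_test, y_train, y_test = train_test_split(
    texts, topics, test_size=0.2, random_state=42, stratify=topics
)

vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
classifier = LogisticRegression(max_iter=1000)
classifier.fit(vectorizer.fit_transform(X_train), y_train)

predictions = classifier.predict(vectorizer.transform(X_test))
print(f"Overall accuracy: {accuracy_score(y_test, predictions):.1%}")
print(classification_report(y_test, predictions))  # per-topic precision/recall
```

A vendor who can walk through something like this, including how the benchmark set was assembled and labeled, makes a “95% accurate” claim far easier to evaluate.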

[The views and opinions expressed in this post are solely the author’s and do not necessarily represent or reflect the views or opinions of the author’s employer or colleagues.]

Congress Looking at Data Science for Ways to Improve Patent Operations

When Congress passed the sweeping Leahy-Smith America Invents Act (AIA) on September 16, 2011, legislators weren’t concerned about how data analytics might improve efficiencies at one of the Commerce Department’s most data-heavy institutions: the US Patent Office. Patent reformers at the time were instead focused on curtailing patent troll litigation and conforming aspects of US patent law to those of other countries. Consequently, the Patent Office’s trove of pre-classified, pre-labeled, and semi-structured patent application and invention data–information ripe for big data analytics–remained mostly untapped at the time.

Fast forward to 2018 and Congress has finally put patent data in its cross-hairs. Now, Congress wants to see how “advanced data science analytics” techniques, such as artificial intelligence, machine learning, and other methods, could be used to analyze patent data and make policy recommendations. If enacted, the “Building Innovation Growth through Data for Intellectual Property Act” or the “BIG Data for IP Act” of 2018 (S. 2601; sponsored by Sen. Coons and Sen. Hatch) would require an investigation into how data science could help the Patent Office understand its current capabilities and whether its information technology systems need modernizing.

Those objectives, however, may be too narrow.  Silicon Valley tech companies, legal tech entrepreneurs, and even students have already seized upon the opportunities big patent data and machine learning techniques present, and, as a result, have developed interesting and useful capabilities.

Take, for example, the group of Stanford University students who in late 2011 developed a machine learning technique to automatically classify US patent applications based on an application’s written invention description. The students, part of Stanford’s CS229 Machine Learning class, proposed their solution around the same time Senator Leahy, Representative Smith, and the rest of Congress were debating the AIA in the fall of 2011. More recently, AI technologies used by companies like Cloem, AllPriorArt, AllPriorClaims, RoboReview, Specif.io, and others have shown how patent data and AI can augment traditional patent practitioners’ roles in the legal services industry.

Some of these AI tools may one day reduce much of the work patent practitioners have traditionally performed and could lead to fewer Examiners at the Patent Office, whose job is to review patent applications for patentability. Indeed, some have imagined a world in which advanced machine learning models conceive inventions and prepare and file patent applications to protect those ideas without further human input. In the future, advanced machine learning models, trained on “prior art” patent data, could routinely examine patent applications for patentability, thus eliminating the need for costly and time-consuming inter partes reviews (trial-like proceedings that have created much uncertainty since enactment of the AIA).

So perhaps Congress’ BIG Data for IP Act should focus less on how advanced data analytics can be used to “improve consistency, detect common sources of error, and improve productivity,” as the bill is currently written, and focus more globally on how patent data, powering new AI models, will disrupt Patent Office operations, the very nature of innovation, and how patent applications are prepared, filed, and examined.

In Your Face Artificial Intelligence: Regulating the Collection and Use of Face Data (Part II)

The technologies behind “face data” collection, detection, recognition, and affect (emotion) analysis were previously summarized. Use cases for face data, and reported concerns about the proliferation of face data collection efforts and instances of face data misuse, were also briefly discussed.

In this follow-on post, a proposed “face data” definition is explored from a governance perspective, with the purpose of providing more certainty as to when heightened requirements ought to be imposed on those involved in face data collection, storage, and use.  This proposal is motivated in part by the increased risk of identity theft and other instances of misuse from unauthorized disclosure of face data, but also recognizes that overregulation could subject persons and entities to onerous requirements.

Illinois’ decade-old Biometric Information Privacy Act (“BIPA”) (740 ILCS 14/1 (2008)), which has been widely cited by privacy hawks and asserted against social media and other companies in US federal and various state courts (primarily Illinois and California), provides a starting point for a uniform face data definition. The BIPA defines “biometric identifier” to include a scan of a person’s face geometry. The scope and meaning of the definition, however, remain ambiguous despite close scrutiny by several courts. In Monroy v. Shutterfly, Inc., for example, a federal district court found that mere possession of a digital photograph of a person and “extraction” of information from such a photograph are excluded from the BIPA:

“It is clear that the data extracted from [a] photograph cannot constitute “biometric information” within the meaning of the statute: photographs are expressly excluded from the [BIPA’s] definition of “biometric identifier,” and the definition of “biometric information” expressly excludes “information derived from items or procedures excluded under the definition of biometric identifiers.”

Slip op., No. 16-cv-10984 (N.D. Ill. 2017). Despite that finding, the Monroy court concluded that a “scan of face geometry” under the statute’s definition includes a “scan” of a person’s face from a photograph (or a live scan of a person’s face geometry). Because the issue was not before it, the court did not address whether the BIPA applies when a scan of any part of a person’s face geometry from an image is insufficient to identify the person in the image. That is, the Monroy holding arguably applies to any data made by a scan, even if that data by itself cannot lead to identifying anyone.

By way of comparison, the European Union’s General Data Protection Regulation (GDPR), which governs “personal data” (i.e., any information relating to an identified or identifiable natural person), will regulate biometric information when it goes into effect in late May 2018. Like the BIPA, the GDPR will place restrictions on “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data” (GDPR, Article 4) (emphasis added). Depending on how EU nation courts interpret the GDPR generally, and Article 4 specifically, a process that creates any biometric data that relates to, could lead to, or allows one to identify a person, or to confirm a person’s identity, is potentially covered under the GDPR.

Thus, to enhance clarity for potentially regulated individuals and companies dealing with US citizens, “face data” could be defined, as set forth below, in a way that considers a minimum quantity or quality of data below which a regulated entity would not be within the scope of the definition (and thus not subject to regulation):

“Face data” means data in the possession or control of a regulated entity obtained from a scan of a person’s face geometry or face attribute, as well as any information and data derived from or based on the geometry or attribute data, if in the aggregate the data in the possession or control of the regulated entity is sufficient for determining an identity of the person or the person’s emotional (physiological) state.

The term “determining an identity of the person or the person’s emotional (physiological) state” relates to any known computational or manual technique for identifying a person or that person’s emotions.

The term “is sufficient” is open to interpretation; it would need to be defined explicitly (or, as is often the case in legislation, left for the courts to fully interpret). The intent of “sufficient” is to permit the anonymization or deletion of data following the processing of video signals or images of a person’s face, so that an entity can avoid being categorized as possessing regulated face data (to the extent probabilistic models and other techniques could not be used to later de-anonymize or reconstruct the missing data and identify a person or that person’s emotional state). The burden of establishing the quality and quantity of face data that is insufficient for identification purposes should rest with the regulated entity that possesses or controls face data.

Face data could include data from the face of a “live” person captured by a camera (e.g., surveillance) as well as data extracted from existing media (e.g., stored images). It is not necessary, however, for the definition to encompass the mere virtual depiction or display of a person in a live video or existing image or video. Thus, digital pictures of friends or family on a personal smartphone would not be face data, and the owner of the phone should not be a regulated entity subject to face data governance. An app on that smartphone, however, that uses face detection algorithms to process the pictures for facial recognition and sends that data to a remote app server for storage and use (e.g., for extraction of emotion information) would create face data.

By way of other examples, a process involving pixel-level data extracted from an image (a type of “scan”) by a regulated entity  would create face data if that data, combined with any other data possessed or controlled by the entity, could be used in the aggregate to identify the person in the image or that person’s emotional state. Similarly, data and information reflecting changes in facial expressions by pixel-level comparisons of time-slice images from a video (also a type of scan) would be information derived from face data and thus would be regulated face data, assuming the derived data combined with other data owned or possessed could be used to identify the person in the image or the person’s emotional state.

Information about the relative positions of facial points based on facial action units could also be data derived from or based on the original scan and thus would be face data, assuming again that the data, combined with any other data possessed by a regulated entity, could be used to identify a person or that person’s emotional state. Classifications of a person’s emotional state (e.g., joy, surprise) based on extracted image data would also be information derived from or based on a person’s face data and thus would also be face data.

Features extracted using deep learning convolutions of an image of a person’s face could also be face data if the convolution information along with other data in the possession or control of a regulated entity could be used to identify a person or that person’s emotional state.

For banks and other institutions that use face recognition for authentication purposes, sufficient face data would obviously need to be in the bank’s possession at some point in time to positively identify a customer making a transaction. This could subject the institution to face data governance during that time period. In contrast, a social media platform that permits users to upload images of people but does not scan or otherwise process the images (such as by cross-referencing other existing data) would not create face data and thus would not subject the platform to face data governance, even if it also possessed tagged images of the same individuals in the uploaded images. Thus, the mere possession or control of images, even if the images could potentially contain identifying information, would not constitute face data. But if a platform were to scan (process) the uploaded images for identification purposes, or sell or provide user-uploaded images to a third party that scans them to extract face geometry or attribute data for purposes such as targeted advertising, the platform and the third party could be subject to face data governance.

The proposed face data definition, which could be modified to include “body data” and “voice data,” is merely one example that US policymakers and stakeholders might consider in the course of assessing the scope of face data governance in the US.  The definition does not exclude the possibility that any number of exceptions, exclusions, and limitations could be implemented to avoid reaching actors and actions that should not be covered, while also maintaining consistency with existing laws and regulations. Also, the proposed definition is not intended to directly encompass specific artificial intelligence technologies used or created by a regulated entity to collect and use face data, including the underlying algorithms, models, networks, settings, hyper-parameters, processors, source code, etc.

In a follow-on post, possible civil penalties for harms caused by face data collection, storage, and use will be briefly considered, along with possible defenses a regulated person or entity may raise in litigation.

Patenting Artificial Intelligence Technology: 2018 Continues Upward Innovation Trend

If the number of patents issued in the first quarter of 2018 is any indication, artificial intelligence technology companies were busy a few years ago filing patent applications for machine learning inventions.

According to US Patent and Trademark Office records, the number of US “machine learning” patents issued to US applicants during the first quarter of 2018 rose 17% compared to the same time period in 2017. The number of US “machine learning” patents issued to any applicant (not just US applicants) rose nearly 19% during the same comparative time period. Mostly double-digit increases were also observed in the case of US origin and total US patents mentioning “neural network” or “artificial intelligence.” Topping the list of companies obtaining patents were IBM, Microsoft, Amazon, Google, and Intel.

The latest patent figures include any US issued patent in which “machine learning,” “artificial intelligence,” or “neural network” is mentioned in the patent’s invention description (to the extent those mentions were ancillary to the invention’s disclosed utility, the above figures are over-inclusive). Because patent applications may spend 1-3 years at the US Patent Office (or more, if claiming priority to earlier-filed applications), the Q1 2018 numbers reflect innovation activity from possibly several years ago.

Republicans Propose Commission to Study Artificial Intelligence Impacts on National Security

Three Republican members of Congress are co-sponsoring a new bill (H.R. 5356) “To establish the National Security Commission on Artificial Intelligence.” Introduced by Rep. Stefanik (R-NY) on March 20, 2018, the bill would create a temporary 11-member Commission tasked with producing an initial report followed by comprehensive annual reports, each providing issue-specific recommendations about national security needs and related risks from advances in artificial intelligence, machine learning, and associated technologies.

Issues the Commission would review include AI competitiveness in the context of national and economic security, means to maintain a competitive advantage in AI (including machine learning and quantum computing), AI investment trends in other countries, workforce and education incentives to boost the number of AI workers, risks from foreign countries’ military employment of AI, and ethics, privacy, and data security, among others.

Unlike other Congressional bills of late (see H.R. 4625–FUTURE of AI Act; H.R. 4829–AI JOBS Act) that propose establishing committees under Executive Branch departments and constituted with both government employees and private citizens, H.R. 5356 would establish an independent Executive Branch commission made up exclusively of Federal employees appointed by the Department of Defense and various Armed Services Committee members, with no private citizen members (ostensibly because of national security and security clearance issues).

Congressional focus on AI technologies has generally been limited to highly autonomous vehicles and vehicle safety, with other areas, such as military impacts, receiving much less attention. By way of contrast, the UK’s Parliament seems far ahead. The UK Parliament Select Committee on AI has already met over a dozen times since mid-2017 and its members have convened numerous public meetings to hear from dozens of experts and stakeholders representing various disciplines and economic sectors.

In Your Face Artificial Intelligence: Regulating the Collection and Use of Face Data (Part I)

Of all the personal information individuals agree to provide companies when they interact with online or app services, perhaps none is more personal and intimate than a person’s facial features and their moment-by-moment emotional states. It may seem that face detection, face recognition, and affect analysis (emotional assessments based on facial features) are technologies only sophisticated and well-intentioned tech companies with armies of data scientists and full-stack engineers are competent to use. In reality, advances in machine learning, microprocessor technology, and the availability of large datasets containing face data have lowered the barriers to entry for conducting robust face detection, face recognition, and affect analysis to levels never seen before.

In fact, anyone with a bit of programming knowledge can incorporate open-source algorithms and publicly available image data, train a model, create an app, and start collecting face data from app users. At the most basic entry point, all one really needs is a video camera with built-in face detection algorithms and access to tagged images of a person to start conducting facial recognition. And several commercial APIs exist that make it relatively easy to tap into facial coding databases for use in assessing others’ emotional states from face data. If you’re not persuaded by the relative ease with which face data can be captured and used, just drop by any college (or high school) hackathon and see creative face data tech in action.
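
To illustrate just how low that barrier is, here is a minimal sketch using OpenCV’s bundled Haar cascade face detector. This only detects faces (recognition and affect analysis would require additional models and data), and the image file name is a hypothetical placeholder.

```python
# Minimal face detection sketch using OpenCV's bundled Haar cascade.
# The input image path is hypothetical.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("group_photo.jpg")           # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale

# Returns one (x, y, width, height) box per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("group_photo_annotated.jpg", image)
```

A few dozen lines like these, plus a commercial recognition or emotion API and a set of tagged images, is roughly the entry point described above.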

In this post, the uses of face data are considered, along with a brief summary of the concerns raised about collecting and using face and emotional data. Part II will explore options for face data governance, which include the possibility of new or stronger laws and regulations and policies that a self-regulating industry and individual stakeholders could develop.

The many uses of our faces

Today’s mobile and fixed cameras and AI-based face detection and recognition software enable real-time controlled access to facilities and devices. The same technology allows users to identify fugitive and missing persons in surveillance videos, private citizens interacting with police, and unknown persons of interest in online images.

The technology provides a means for conducting and verifying commercial transactions using face biometric information, tracking people automatically while in public view, and extracting physical traits from images and videos to supplement individual demographic, psychographic, and behavioristic profiles.

Face software and facial coding techniques and models are also making it easier for market researchers, educators, robot developers, and autonomous vehicle safety designers to assess emotional states of people in human-machine interactions.

These and other use cases are possible in part because of advances in camera technology, the proliferation of cameras (think smart phones, CCTVs, traffic cameras, laptop cameras, etc.) and social media platforms, where millions of images and videos are created and uploaded by users every day. Increased computer processing power has led to advances in face recognition and affect-based machine learning research and improved the ability of complex models to execute faster. As a result, face data is relatively easy to collect, process, and use.

One can easily imagine the many ways face data might be abused, and some of the abuses have already been reported. Face data and machine learning models have been improperly used to create pornography, for example, and to track individuals in stores and other public locations without notice and without seeking permission. Models based on face data have reportedly been developed for no apparent purpose other than predictive classification of beauty and sexual orientation.

Face recognition models are also subject to errors. Despite improvements, misidentification remains a weakness of face recognition and affect-based models, which can translate into false positives. Obviously, tragic consequences can occur if police or government agencies make decisions based on a false positive (or false negative) identification of a person.

Face data models have been shown to perform more accurately on persons with lighter skin color. And affect models, while raising fewer concerns compared to face recognition due mainly to the slower rate of adoption of the technology, may misinterpret emotions if culture, geography, gender, and other factors are not accounted for in training data.

Of course, instances of reported abuse, bias, and data breaches overshadow the many unreported positive uses and machine learning applications of face data. But as is often the case, problems tend to catch the eyes of policymakers, regulators, and legislators, though overreaction to hyped problems can result in a patchwork of regulations and standards that go beyond addressing the underlying concerns and cause unintended effects, such as possibly stifling innovation and reducing competitiveness.

Moreover, reactionary regulation doesn’t play well with fast-moving disruptive tech, such as face recognition and affective computing, where the law seems always to be in catch-up mode. Compounding the governance problem is the fact that regulators and legislators cannot read a crystal ball and see into the future. Indeed, future uses of face data technologies may be hard to imagine today.

Even so, what matters to many is what governments and companies are doing with still images and videos, and specifically how face data extracted from media are being used, sometimes without consent. These concerns raise questions of transparency, privacy laws, terms of service and privacy policy agreements, data ownership, ethics, and data breaches, among others. They also implicate issues of whether and when federal and state governments should tighten existing regulations and impose new regulations where gaps exist in face data governance.

With recent data breaches making headlines and policymakers and stakeholders gathering in 2018 to examine AI’s impacts, there is no better time than now to revisit the need for stronger laws and to develop new technical- and ethical-based standards and guidelines applicable to face data. The next post will explore these issues.

Regulating Artificial Intelligence Technologies by Consensus

As artificial intelligence technologies continue to transform industries, several prominent voices in the technology community are calling for regulating AI to get ahead of what they see as AI’s actual and potential social and economic impacts. These calls for action follow reports of machine learning classification bias, instances of open source AI tools being misused, lack of transparency in AI algorithms, privacy and data security issues, and forecasts of workforce impacts as AI technologies spread.

Those advocating for strong state or federal legislative action around AI, however, may be disappointed by the rate at which policymakers in the US are tackling sensitive issues. But they may be even more disappointed by recent legislative efforts suggesting that AI technologies will not be regulated in the traditional sense, but instead may be governed through a process of consensus building without targeted and enforceable standards. This form of technological governance–often called “soft law”–is not new. In some industries, soft law governance has evolved and taken over the more traditional command and control “hard law” governance approach.

Certain transformative technologies like AI evolve faster than policymakers’ ability to keep up, and as a result, at least in the US, AI’s future may not be tied to traditional legislative lawmaking, notice-and-comment rulemaking, and regulation by multiple government agencies whose missions include overseeing specific industry activities. According to those who have studied this trend, the hard law approach is gradually dying when it comes to certain tech, with the exception of technologies in highly-regulated segments such as autonomous vehicles (e.g., safety regulations) and fintech (e.g., regulatory oversight of distributed ledger tech and cryptocurrencies). Instead, an industry-led, self-regulatory multistakeholder process is emerging whereby participants, including government policymakers, come up with consensus-based standards and processes that form a framework for regulating industry activities.

This process is already apparent when it comes to AI. Organizations like the IEEE have produced consensus-style standards for ethical considerations in the design and development of AI systems, and private companies are publishing their views on how they and others can self-regulate their activities, products, and services in the AI space. That is not to say that policymakers will play no role in the governance of AI. The US Congress and New York City, for example, are considering or in the process of implementing multistakeholder task forces for tackling the future of AI, workforce and education issues, and harms caused by machine learning algorithms.

A multistakeholder approach to regulating AI technologies is less likely to stifle innovation and competitiveness compared to a hard law prescriptive approach, which could involve numerous regulatory requirements, inflexible standards, and civil penalties for violations. But some view hard law governance as providing a measure of predictability that consensus approaches cannot duplicate. If multistakeholder governance is in AI’s future, stakeholders will need to develop and adopt meaningful standards, and the industry will need to demonstrate a willingness to be held accountable in ways that go beyond simply appeasing vocal opponents and assuaging negative public sentiment toward AI. If they don’t, legislators may feel pressure to take a more hard-law tack with AI technologies.

Industry Focus: The Rise of Data-Driven Health Tech Innovation

Artificial intelligence-based healthcare technologies have contributed to improved drug discoveries, tumor identification, diagnosis, risk assessments, electronic health records (EHR), and mental health tools, among others. Thanks in large part to AI and the availability of health-related data, health tech is one of the fastest growing segments of healthcare and one of the reasons why the sector ranks highest on many lists.

According to a 2016 workforce study by Georgetown University, the healthcare industry experienced the largest employment growth among all industries since December 2007, netting 2.3 million jobs (about an 8% increase). Fourteen percent of all US workers work in healthcare, making it the country’s largest employment center. According to the latest government figures, the US spends more on healthcare per person ($10,348) than any other country. In fact, healthcare spending is nearly 18 percent of the US gross domestic product (GDP), a figure that is expected to increase. The healthcare IT segment is expected to grow at a compound annual growth rate (CAGR) greater than 10% through 2019. The number of US patents issued in 2017 for AI-infused healthcare-related inventions rose more than 40% compared to 2016.

Investment in health tech has led to the development of some impressive AI-based tools. Researchers at a major university medical center, for example, invented a way to use AI to identify, from open source data, the emergence of health-related events around the world. The machine learning system they created extracted useful information and classified it according to disease-specific taxonomies. At the time of its development ten years ago, its “supervised” and “unsupervised” natural language processing models were leaps ahead of what others were using and earned the inventors national recognition. More recently, medical researchers have created a myriad of new technologies from innovative uses of machine learning.

What most of the above and other health tech innovations today have in common is what drives much of the health tech sector: lots of data. Big data sets, especially labeled data, are needed by AI technologists to train and test machine learning algorithms that produce models capable of “learning” what to look for in new data. And there is no better place to find big data sets than in the healthcare sector. According to an article last year in the New England Journal of Medicine, by 2012 as much as 30% of the world’s stored data was being generated in the healthcare industry.

Traditional healthcare companies are finding value in data-driven AI. Biopharmaceutical company Roche’s recent announcement that it is acquiring software firm Flatiron Health Inc. for $1.9 billion illustrates the value of being able to access health-related data. Flatiron, led by former Google employees, makes software for real-time acquisition and analysis of oncology-specific EHR data and other structured and unstructured hospital-generated data for diagnostic and research purposes. Roche plans to leverage Flatiron’s algorithms–and all of its data–to enhance its ability to personalize healthcare strategies by accelerating the development of new cancer treatments. In a world powered by AI, where data is key to building new products that attract new customers, Roche is now tapped into one of the largest sources of labeled data.

Companies not traditionally in healthcare are also seeing opportunities in health-related data. Google’s AI-focused research division, for example, recently reported in Nature a promising use of so-called deep learning algorithms (computational networks loosely structured to mimic how neurons fire in the brain) to make cardiovascular risk predictions from retinal image data. After training their model, Google scientists said they were able to identify and quantify risk factors in retinal images and generate patient-specific risk predictions.
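
For readers curious about the general shape of such a model, here is an illustrative sketch only (not Google’s published architecture): a small convolutional network that maps image data to a single risk probability. Synthetic random data stands in for real, labeled retinal images so the example runs end to end.

```python
# Illustrative sketch of a small convolutional network for image-based risk
# prediction. This is not Google's published model; synthetic data is used.
import numpy as np
import tensorflow as tf

# Stand-in data: 100 random 128x128 RGB "images" with binary risk labels.
images = np.random.rand(100, 128, 128, 3).astype("float32")
labels = np.random.randint(0, 2, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # risk probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(images, labels, epochs=2, batch_size=16)

# Patient-specific prediction for one new (here, synthetic) image.
print(model.predict(images[:1]))
```

Real systems differ enormously in scale, training data, and validation, but the basic pattern (labeled images in, a learned risk score out) is representative of how labeled health data powers these models.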

The growth of available healthcare data and the infusion of AI health tech in the healthcare industry will challenge companies to evolve. Health tech holds the promise of better and more efficient research, manufacturing, and distribution of healthcare products and services, though some have also raised concerns about who will benefit most from these advances, bias in data sets, anonymizing data for privacy reasons, and other legal issues that go beyond healthcare and that will need to be addressed.

To be successful, tomorrow’s healthcare leaders may be those who have access to data that drives innovation in the health tech segment. This may explain why, according to a recent survey, healthcare CIOs whose companies plan spending increases in 2018 indicated that their investments will likely be directed first toward AI and related technologies.

“AI vs. Lawyers” – Interesting Result, Bad Headline

The recent clickbait headline “AI vs. Lawyers: The Ultimate Showdown” might lead some to believe that an artificial intelligence system and a lawyer were dueling adversaries or parties on opposite sides of a legal dispute (notwithstanding that an “intelligent” machine has not, as far as US jurisprudence is concerned, been recognized as having machine rights or standing in state or federal courts).

Follow the link, however, and you end up at LawGeex’s report titled “Comparing the Performance of Artificial Intelligence to Human Lawyers in the Review of Standard Business Contracts.” The 37-page report details a straightforward, but still impressive, comparison of the accuracy of machine learning models and lawyers in the course of performing a common legal task.

Specifically, LawGeex set out to consider, in what they call a “landmark” study, whether an AI-based model or skilled lawyers are better at issue spotting while reviewing Non-Disclosure Agreements (NDAs).

Issue spotting is a task that paralegals, associate attorneys, and partners at law firms and corporate legal departments regularly perform. It’s a skill learned early in one’s legal career and involves applying knowledge of legal concepts and issues to identify, in textual materials such as contract documents or court opinions, specific and relevant facts, reasoning, conclusions, and applicable laws or legal principles of concern. Issue spotting in the context of contract review may simply involve locating a provision of interest, such as a definition of “confidentiality” or an arbitration requirement in the document.

Legal tech tools using machine learning algorithms have proliferated in the last couple of years. Many involve combinations of AI technologies and typically require processing thousands of documents (often “labeled” by category or type of document) to create a model that “learns” what to look for in the next document it processes. In the LawGeex study, for example, the model was trained on thousands of NDA documents. Following training, it processed five new NDAs selected by a team of advisors, while 20 experienced contract attorneys were given the same five documents and four hours to review them.
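
As a rough illustration of that general approach (and emphatically not LawGeex’s actual system), the sketch below trains a simple classifier on clauses labeled by provision type and then flags which provision types appear in a new NDA. The file names and provision labels are hypothetical placeholders.

```python
# Toy provision-spotting sketch: train on labeled clauses, then flag which
# provision types appear in a new NDA. Files and labels are hypothetical.
import json

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: clauses labeled by provision type, e.g.
# {"clause": "Each party shall keep ... confidential ...", "provision": "confidentiality"}
with open("labeled_nda_clauses.json") as f:
    training = json.load(f)

clauses = [record["clause"] for record in training]
provisions = [record["provision"] for record in training]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    MultinomialNB(),
)
model.fit(clauses, provisions)

# "Issue spot" a new NDA: classify each paragraph and report the provision
# types the model believes are present.
with open("new_nda.txt") as f:
    paragraphs = [p.strip() for p in f.read().split("\n\n") if p.strip()]

print("Provisions spotted:", sorted(set(model.predict(paragraphs))))
```

Production systems like LawGeex’s are presumably far more sophisticated, but the basic workflow (labeled documents in, provision predictions out) follows this pattern.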

The results were unsurprising: LawGeex’s trained model was able to spot provisions, from a pre-determined set of 30 provisions, at a reported accuracy of 94% compared to an average of 85% for the lawyers (the highest-performing lawyer, LawGeex noted, had an accuracy of 94%, equaling the software).

Notwithstanding the AI vs. lawyers headline, LawGeex’s test results raise the question of whether the task of legal issue spotting in NDA documents has been effectively automated (assuming a mid-nineties accuracy is acceptable). And do machine learning advances like these portend that other common tasks lawyers perform will someday be handled by intelligent machines?

Maybe. But no matter how sophisticated AI tech becomes, algorithms will still require human input. And algorithms are a long way from being able to handle a client’s sometimes complex objectives, unexpected tactics opposing lawyers might deploy in adversarial situations, common sense, and other inputs that factor into a lawyer’s context-based legal reasoning and analysis duties. No AI tech is currently able to handle all that. Not yet anyway.

A Proposed AI Task Force to Confront Talent Shortage and Workforce Changes

Just over a month after House and Senate commerce committees received companion bills recommending a federal task force to globally examine the “FUTURE” of Artificial Intelligence in the United States (H.R. 4625; introduced Dec. 12, 2017), a House education and workforce committee is set to consider a bill calling for a task force assessment of the impacts of AI technologies on the US workforce.

If enacted, the “Artificial Intelligence Job Opportunities and Background Summary Act of 2018,” or the “AI JOBS Act of 2018” (H.R. 4829; introduced Jan. 18, 2018), would require the Secretary of Labor to report on impacts and growth of AI, industries and workers who may be most impacted by AI, expertise and education needed in an AI economy (compared to today), an identification of workers who will experience expanded career opportunities from AI and those who may be vulnerable to career displacement, and ways to alleviate workforce displacement and prepare a future AI workforce.

Assessing these issues now is critical. Former Senator Tom Daschle and David Beier, in a recent opinion published in The Hill, see a “dramatic set of changes” in the nature of work in America as AI technologies become more entrenched in the US economy. Citing a McKinsey Global Institute study of 800 occupations, Daschle and Beier conclude that AI technologies will not cause net job losses. Rather, job losses will likely be offset by job changes and gains in fields such as healthcare, infrastructure development, energy, and in fields that do not exist today. They cite Gartner Research estimates suggesting millions of new jobs will be created directly or indirectly as a result of the AI economy.

Already there are more AI-related jobs than high-skilled workers to fill them. One popular professional networking site currently lists over 6,000 “artificial intelligence” jobs. Chinese internet giant Tencent estimates there are only 300,000 AI experts worldwide (recent estimates by Toronto-based Element AI put that figure at merely 90,000). In testimony this week before a House Information Technology subcommittee, Intel’s CTO Amir Khosrowshahi said, “Workers need to have the right skills to create AI technologies and right now we have too few workers to do the job.” Huge salaries for newly-minted computer science PhDs will drive more to the field, but job openings are likely to outpace available talent even as record numbers of students enroll in machine learning and related AI classes at top US universities.

If AI job gains shift workers disproportionately toward high-skilled jobs, the result may be continued job opportunity inequality. A 2016 study by Georgetown University’s Center on Education and the Workforce found that “out of the 11.6 million jobs created in the post-recession economy, 11.5 million went to workers with at least some college education.” The study authors found that, since 2008, graduate degree workers had the most job gains (83%), predominantly in high-skill occupations, and college graduates saw the next highest job gains (57%), also in high-skill jobs. The highest job growth was seen in management, healthcare, and computer and mathematical sciences. These same fields are prime for a future influx of highly-skilled AI workers.

The US is not alone in raising concerns about job and workforce changes in an AI economy. The UK Parliament’s Artificial Intelligence Committee, for example, is confronting challenges in re-educating the UK’s workforce to improve the skills needed to work alongside AI systems. The US may need to do more to catch up, according to Mr. Khosrowshahi. “Current federal funding levels [in tech education],” he argued, “are not keeping pace with the rest of the industrialized world.”

The AI JOBS Act of 2018 presents an opportunity for US policymakers to develop novel approaches to address expected workforce shifts caused by an AI economy. If nothing is done, the US could find itself at a competitive disadvantage with increasing economic inequality.