Senate-Passed Defense Authorization Bill Funds Artificial Intelligence Programs

The Senate-passed national defense authorization bill (H.R.5515, as amended), to be known as the John S. McCain National Defense Authorization Act for Fiscal Year 2019, includes provisions authorizing funding for several artificial intelligence technology programs.

Passed by a vote of 85-10 on June 18, 2018, the bill would authorize funding for the Department of Defense “to coordinate the efforts of the Department to develop, mature, and transition artificial intelligence technologies into operational use.” A designated Coordinator would oversee the services’ joint activities in developing a strategic plan for AI-related research and development, and would help accelerate the development and fielding of AI technologies across the services.  Notably, the Coordinator is to develop appropriate ethical, legal, and other policies governing the development and use of AI-enabled systems in operational situations. Within one year of enactment, the Coordinator is to complete a study on the future of AI in the context of DOD missions, including recommendations for integrating “the strengths and reliability of artificial intelligence and machine learning with the inductive reasoning power of a human.”

In other provisions, the Director of the Defense Intelligence Agency (DIA) is tasked with submitting a report to Congress within 90 days of enactment comparing the capabilities of the US in emerging technologies (including AI) with the capabilities of US adversaries in those technologies.

The bill would also require the Under Secretary of Defense for Research and Engineering to pilot the use of machine-vision technologies to automate certain manual tasks in weapons systems manufacturing. Specifically, tests would be conducted to assess whether computer vision technology is effective, and at a sufficient level of readiness, to determine the authenticity of microelectronic parts from the time of their creation through final insertion into weapon systems.

The Senate version of the 2019 authorization bill replaces an earlier House version (passed 351-66 on May 24, 2018).

At the Intersection of AI, Face Swapping, Deep Fakes, Right of Publicity, and Litigation

Websites like GitHub and Reddit offer developers and hobbyists dozens of repositories containing artificial intelligence deep learning models, instructions for their use, and forums for learning how to “face swap,” a technique used to automatically replace the face of a person in a video with that of a different person. Older forms of face swapping, primarily applied to still images, have been around for years in entertaining apps that produced results of unremarkable quality (think cut-and-paste at the low end, and Photoshop editing at the higher end). With the latest AI models, however, including deep neural networks, a video with a face-swapped actor–a so-called “deep fake”–may appear so seamless and uncanny as to fool even the closest inspection, and the quality is apparently getting better.

With only subtle clues to suggest that an actor in one of these videos is fake, the developers behind them have become targets of criticism, though much of that criticism has also been leveled more generally at the AI tech industry for creating new AI tools with few restrictions on potential uses beyond their original intent.  These concerns have now reached the halls of New York’s state legislature.

New York lawmakers are responding to the deep fake controversy, albeit in a narrow way, by proposing to make it illegal to use “digital replicas” of individuals without permission, a move that would indirectly regulate AI deep learning models. New York Assembly Bill No. A08155 (introduced in 2017, amended Jun. 5, 2018) is aimed at modernizing New York’s right of publicity law (N.Y. Civ. Rights Law §§ 50 and 51)–one of the nation’s oldest publicity rights laws, and one that provides no post-mortem publicity rights–though it may do little to curb the broader proliferation of face-swapped and deep fake videos. In fact, only a relatively small slice of primarily famous New York actors, artists, athletes, and their heirs and estates would benefit from the proposed law’s digital replicas provision.

If enacted, New York’s right of publicity law would be amended to address a computer-generated or electronic reproduction of a living or deceased individual’s likeness or voice that “realistically depicts” the likeness or voice of the individual being portrayed (“realistic” is undefined). Use of a digital replica without the individual’s consent would violate the law if the use is in a scripted audiovisual or audio work (e.g., a movie or sound recording), or in a live performance of a dramatic work, and is intended to and creates the clear impression that the individual represented by the digital replica is performing, in the role of a fictional character, the activity for which he or she is known.

It would also be a violation of the law to use a digital replica of a person in a performance of a musical work that is intended to and creates the clear impression that the individual represented by the digital replica is performing the activity for which he or she is known, in such musical work.

Moreover, it would be a violation to use a digital replica of a person in an audiovisual work that is intended to and creates the clear impression that an athlete represented by the digital replica is engaging in an athletic activity for which he or she is known.

Based on First Amendment principles, the bill would exempt uses involving parody, satire, commentary, and criticism; political, public interest, or newsworthy situations, including documentaries, regardless of the degree of fictionalization in the work; and de minimis or incidental uses.

Most relevant to deep fakes, the bill would make it a violation to use a digital replica, without the individual’s consent, in an audiovisual pornographic work in a manner that is intended to and creates the impression that the individual represented by the digital replica is performing.

Similar to the safe harbor provisions in other statutes, the New York law would provide limited immunity to any medium used for advertising (including, but not limited to, newspapers, magazines, radio and television networks and stations, cable television systems, billboards, and transit advertising) that makes unauthorized use of an individual’s persona for purposes of advertising or trade, unless it is established that the owner or an employee had knowledge of the unauthorized use, through presence or inclusion, of the individual’s persona in the advertisement or publication.

Moreover, the law would provide a private right of action allowing an injured party to sue for an injunction and to seek damages. Statutory damages of $750 would be available, or compensatory damages, which could be significantly higher.  The finder of fact (judge or jury) could also award exemplary damages, which could be substantial, to deter others from violating the law.

So far, AI tech developers have largely avoided direct legislative or regulatory action targeting their AI technologies, in part because some have taken steps to self-regulate, which may be necessary to avoid command and control-style state or federal regulatory schemes that would impose standards, restrictions, and requirements, along with private rights of action for damages and attorneys’ fees. Tech companies’ efforts at self-regulation, however, have been limited to expressing carefully crafted AI policies for themselves and their employees, and taking public stances on issues of bias, ethics, and the civil rights impacts of AI machine learning. Despite those efforts, more laws like New York’s may be introduced at the state level if AI technologies are used in ways that have questionable utility or social benefit.

For more about the intersection of right of publicity laws and regulating AI technology, please see an earlier post on this website, available here.

California Jury to Decide if Facebook’s Deep Learning Facial Recognition Creates Regulated Biometric Information

Following a recent decision issued by Judge James Donato of the U.S. District Court for the Northern District of California, a jury to be convened in San Francisco in July will decide whether a Facebook artificial intelligence technology creates regulated “biometric information” under Illinois’ Biometric Information Privacy Act (BIPA).  In some respects, the jury’s decision could reflect general sentiment toward AI during a time when vocal opponents of AI have been widely covered in the media.  The outcome could also affect how US companies, already impacted by Europe’s General Data Protection Regulation (GDPR), view their use of AI technologies to collect and process user-supplied data. For lawyers, the case could highlight effective litigation tactics in highly complex AI cases where black box algorithms are often unexplainable and lack transparency, even to their own developers.

What’s At Stake? What Does BIPA Cover?

Uniquely personal biometric identifiers, such as a person’s face and fingerprints, are often seen as needing heightened protection from hackers because, unlike a stolen password that one can reset, a person cannot change their face or fingerprints if someone makes off with digital versions and uses them to steal the person’s identity or gain access to the person’s biometrically protected accounts, devices, and secure locations. The now 10-year-old BIPA (740 ILCS 14/1 (2008)) was enacted to ensure users are made aware of instances when their biometric information is being collected, stored, and used, and to give users the option to opt out. The law imposes requirements on companies and penalties for non-compliance, including liquidated and actual damages. At issue here, the law addresses “a scan” of a person’s “face geometry,” though it falls short of explicitly defining those terms.

Facebook users voluntarily upload to their Facebook accounts digital images depicting them, their friends, and/or family members. Some of those images are automatically processed by an AI technology to identify the people in the images. Plaintiffs–here, members of a putative class–argue that Facebook’s facial recognition feature involves a “scan” of a person’s “face geometry” such that it collects and stores biometric data in violation of BIPA.

Summary of the Court’s Recent Decision

In denying the parties’ cross-motions for summary judgment and allowing the case to go to trial, Judge Donato found that the Plaintiffs and Facebook “offer[ed] strongly conflicting interpretations of how the [Facebook] software processes human faces.” See In Re Facebook Biometric Information Privacy Litigation, slip op. (Dkt. 302), No. 3:15-cv-03747-JD (N.D. Cal. May 14, 2018). The Plaintiffs, he wrote, argued that “the technology necessarily collects scans of face geometry because it uses human facial regions to process, characterize, and ultimately recognize face images.” On the other hand, “Facebook…says the technology has no express dependency on human facial features at all.”

Addressing Facebook’s interpretation of BIPA, Judge Donato considered the threshold question of what BIPA’s drafters meant by a “scan” in “scan of face geometry.” He rejected Facebook’s suggestion that BIPA requires an express measurement of human facial features such as “a measurement of the distance between a person’s eyes, nose, and ears.” In doing so, he relied on extrinsic evidence in the form of dictionary definitions, specifically Merriam-Webster’s 11th edition, for the ordinary meaning of “to scan” (i.e., to “examine” by “observation or checking,” or “systematically . . . in order to obtain data especially for display or storage”) and “geometry” (which in everyday use means simply a “configuration,” which in turn denotes a “relative arrangement of parts or elements”).  “[N]one of these definitions,” the Judge concluded, “demands actual or express measurements of spatial quantities like distance, depth, or angles.”

The Jury Could Face a Complex AI Issue

Digital images contain a numerical representation of what is shown in the image, specifically the color (or grayscale), transparency, and other information associated with each pixel of the image. An application running on a computer can render the image on a display device by reading the file data to identify what color or grayscale level each pixel should display. Scanning a physical photograph or taking a digital photo with a smartphone systematically generates this pixel-level data. Digital image data may be saved to a file having a particular format designated by a file extension (e.g., .GIF, .JPG, .PNG, etc.).
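To make that pixel-level data concrete, here is a minimal sketch–assuming the commonly used Pillow and NumPy Python libraries, with a placeholder file name–of how an application reads a digital image’s per-pixel color values:

```python
# Minimal illustration (not any party's actual software) of reading the
# per-pixel data stored in a digital image file. Assumes the Pillow and
# NumPy libraries; "photo.jpg" is a placeholder path.
from PIL import Image
import numpy as np

image = Image.open("photo.jpg").convert("RGB")  # decode the file (JPEG, PNG, etc.)
pixels = np.asarray(image)                      # height x width x 3 array of color values

height, width, channels = pixels.shape
print(f"{width}x{height} image, {channels} color channels per pixel")

# Each entry holds the red, green, and blue intensity (0-255) of one pixel.
r, g, b = pixels[0, 0]
print(f"Top-left pixel color: R={r}, G={g}, B={b}")
```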

A deep convolutional neural network–a type of AI–can be used to further process a digital image file’s data to extract features from the data. In a way, the network replicates the human cognitive process of examining a photograph. For instance, when we examine a face in a photo, we take note of features and attributes, like nose and lip shape and their contours, as well as eye color and hair. Those and other features may help us recall from memory whose face we are looking at, even if we have never seen that particular image before.

A deep neural network, once it is fully trained using many different face images, essentially works in a similar manner. After processing image file data to extract and “recognize” features, the network uses those features to classify the image by associating it with an identity, assuming it has “seen” the face before (in which case it may compare the extracted features to a template image of the face, or preferably several images of the face). Thus, just as a digital image file contains a numerical representation of what is shown in the image, a deep neural network creates a numerical representation of the features shown in the image in order to perform classification.  A question for the jury, then, may involve deciding whether the processing of uploaded digital images using a deep convolutional neural network involves “a scan” of “a person’s face geometry.” That question will challenge the parties and their lawyers to help the jury understand digital image files and the nuances of AI technology.
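As a rough illustration of that pipeline–and emphatically not a description of Facebook’s system–the sketch below uses a stock pretrained convolutional network from the torchvision library as a generic feature extractor and compares two images by the similarity of their feature vectors. A real face recognizer would use a network trained specifically on faces, and the file names here are placeholders:

```python
# Simplified sketch of CNN-based recognition: extract a numerical feature
# vector ("embedding") from each image, then compare the vectors. Generic
# illustration only; a production face recognizer would use a network
# trained on faces rather than a stock ResNet, and the file names are
# placeholders.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained CNN with its final classification layer removed, leaving a
# feature extractor that maps an image to a 512-number vector.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    """Return the network's feature vector for the image at `path`."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0)

# Compare an uploaded photo against a stored "template" embedding.
template = embed("known_person.jpg")
candidate = embed("uploaded_photo.jpg")
similarity = torch.nn.functional.cosine_similarity(template, candidate, dim=0)
print(f"Cosine similarity: {similarity.item():.3f}")  # closer to 1.0 = more alike
```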

For Litigators, How to Tackle AI and Potential AI Bias?

The particulars of advanced AI have not been central to a major federal jury case to date.  Thus, the Facebook case offers an opportunity to evaluate a jury’s reaction to a particular AI technology.

In its summary judgment brief, Facebook submitted expert testimony that its AI “learned for itself what features of an image’s pixel values are most useful for the purposes of characterizing and distinguishing images of human faces” and it “combines and weights different combinations of different aspects of the entire face image’s pixel value.” This description did not persuade Judge Donato to conclude that an AI with “learning” capabilities escapes BIPA’s reach, at least not as a matter of law.  Whether it will be persuasive to a jury is an open question.

Some potential jurors may have preconceived notions about AI, given the hype surrounding the technology’s use cases.  Indeed, outside the courthouse, AI’s potential dark side and adverse impacts on society have been widely reported. Computer vision-enabled attack drones, military AI systems, jobs being taken over by AI-powered robots, algorithmic harm due to machine learning bias, and artificial general intelligence (AGI) taking over the world appear regularly in the media.  If bias for or against AI is not properly managed, the jury’s final decision might be viewed by some as a referendum on AI.

For litigators handling AI cases in the future, the outcome of the Facebook case could provide a roadmap for effective trial strategies involving highly complex AI systems that defy simple description.  That is not to say the outcome will create a new paradigm for litigating tech. After all, many trials involve technical experts who try to explain complex technologies in a way that resonates with a jury, and complex technology is often at the center of disputes involving intellectual property, medical malpractice, and finance, among others.  But those cases usually don’t involve technologies that “learn” for themselves.

How Will the Outcome Affect User Data Collection?

The public is becoming more aware that tech companies entice users to their platforms and apps as a way to generate user-supplied data. While the Facebook case itself may not usher in a wave of new laws and regulations, or even self-policing by the tech industry aimed at curtailing user data collection, a sizeable damages award from the jury could have a chilling effect, prompting some companies to be more transparent about their data collection and to provide better notice and opt-out mechanisms.

10 Things I Wish Every Legal Tech Pitch Would Include

Due in large part to the emergence of advanced artificial intelligence-based legal technologies, the US legal services industry is in the midst of a tech shakeup.  The number of advanced legal tech startups continues to increase, and so do the opportunities for law firms to receive product presentations from those vendors.

Over the last several months, I’ve participated in several pitches and demos from leading legal tech vendors.  Typically given by company founders, executives, technologists, and/or sales staff, these presentations have been delivered live, as audio-video conferences, by phone with a separate web demo, or as pre-recorded audio-video demos (e.g., a slide deck video with voiceover).  Often, a vendor’s lawyer will discuss how his or her company’s software addresses various needs and issues arising in one or more law firm practice areas.  Most presentations also include statements about advanced legal tech boosting law firm revenues, making lawyers more efficient, and improving client satisfaction (ostensibly, a reminder of what’s at stake for those who ignore this latest tech trend).

Based on these (admittedly small number of) presentations, here is my list of things I wish every legal tech presentation would provide:

1. Before a presentation, I wish vendors would provide an agenda and the bios of the company representatives who will be delivering the pitch. I want to know what’s being covered and who’s going to be giving the presentation.  Do they have a background in AI and the law, or are they tech generalists? This helps me prepare for the meeting and frame questions during Q&A (and reduces the number of follow-up conference calls).  Ideally, presenters should know their own tech inside and out, as well as an area of law, so they can show how the software makes a difference in that area. I’ve seen pitches by businesspeople who are really good at selling, and by programmers who are really good at talking about bag-of-words bootstrapping algorithms. The best person to pitch legal tech is someone who knows both the practice of law and how tech works in a typical law firm setting.

2. Presenters should know who they are talking to at a pitch and tailor accordingly.  I’m a champion for legal tech and want to know the details so I can tell my colleagues about your product.  Others just want to understand what adopting legal tech means for daily law practice. Find out who’s who and which practice group(s) or law firm function they represent and then address their specific needs.

3. The legal tech market is filling up with offerings that perform a single, narrow function, so I want to understand all the ways your application might help replace or augment law firm tasks. Mention how your tech could be utilized in the practice areas where it’s best deployed (or where it could be deployed in the future, in the case of features still in the development pipeline). The more capabilities an application has, the more attractive your prices begin to appear (and the fewer vendor roll-outs and training sessions I and my colleagues will have to sit through).

4. Don’t oversell capabilities. If you claim new features will be implemented soon, they shouldn’t take months to deploy. If your software is fast and easy, it had better be both, judged from an experienced attorney’s perspective. If your machine learning text classification models are not materially different than your competitors’, avoid saying they’re special or unique. On the other hand, if your application includes a demonstrable unique feature, highlight it and show how it makes a tangible difference compared to other available products in the market. Finally, if your product shouldn’t be used for high stakes work or has other limitations, I want to understand where that line should be drawn.

5. Speaking of overselling, if I hear about an application’s performance characteristics, especially numerical values for things like accuracy, efficiency, and time saved, I want to see the benchmarks and protocols used to measure those characteristics.  While accuracy and other metrics are useful for distinguishing one product from another, they can be misleading. For example, a claim that a natural language processing model is 95% accurate at classifying text by topic should be backed up with a comparison to a benchmark and an explanation of the measurement protocol used (a short sketch after this list illustrates how a bare accuracy figure can mislead).  A claim that a law firm was 40-60% more efficient using your legal tech, without details about how those figures were derived, isn’t all that compelling.

6. I want to know if your application has been adopted by top law firms, major in-house legal departments, courts, and attorneys general, but be prepared to provide data to back up claims.  Are those organizations paying a hefty annual subscription fee but only using the service a few times a month, or are your cloud servers overwhelmed by your user base? Monthly active users, API requests per domain, etc., can place usage figures in context.

7. I wish proof-of-concept testing was easier.  It’s hard enough to get law firm lawyers and paralegals interested in new legal tech, so provide a way to facilitate testing your product. For example, if you pitch an application for use in transactional due diligence, provide a set of common due diligence documents and walk through a realistic scenario. This may need to be done for different practice groups and functions at a firm, depending on the nature of the application.

8. I want to know how a legal tech vendor has addressed confidentiality, data security, and data assurance in instances where a vendor’s legal tech is a cloud-based service. If a machine learning model runs on a platform that is not behind the firm’s firewall and intrusion detection systems, that’s a potential problem in terms of safeguarding client confidential information. While vendors need to coordinate first with a firm’s CSO about data assurance/security, I also want to know the details.

9. I wish vendors would provide better information demonstrating how their applications have helped others develop business. For example, tell me if your application helped a law firm respond to, and win, a Request for Proposal (RFP), or if a client gave more work to a firm that demonstrated advanced legal tech acumen.  While such information may be merely anecdotal, I can probably champion legal tech on the basis of business development even if a colleague isn’t persuaded by things like accuracy and efficiency.

10. Finally, a word about design.  I wish legal tech developers would place more emphasis on UI/UX. Some recent offerings appear ready for beta testing rather than a roll-out to prospective buyers. I’ve seen demos in which a vendor’s interface contained basic formatting errors, something any quality control process should have caught. Some UIs are bland and lack intuitiveness when they should be user-friendly and have a quality look and feel. Use a distinctive theme and graphics style, and adopt a brand that stands out. For legal tech to succeed in the market, technology and design both must meet expectations.
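As promised in item 5, here is a hypothetical illustration–using scikit-learn and invented numbers–of why a bare accuracy figure needs a baseline and a measurement protocol behind it:

```python
# Hypothetical illustration: on an imbalanced document set, a "classifier"
# that always predicts the majority label still scores high accuracy while
# finding nothing of interest. The labels below are invented.
from sklearn.metrics import accuracy_score, f1_score

# Suppose 95 of 100 documents are "not relevant" (0) and 5 are "relevant" (1).
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100  # a model that simply labels everything "not relevant"

print("Accuracy:", accuracy_score(y_true, y_pred))                             # 0.95
print("F1 on the relevant class:", f1_score(y_true, y_pred, zero_division=0))  # 0.0
# A "95% accurate" claim is hollow without a baseline, a benchmark data set,
# and a stated measurement protocol.
```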

[The views and opinions expressed in this post are solely the author’s and do not necessarily represent or reflect the views or opinions of the author’s employer or colleagues.]

Industry Focus: The Rise of Data-Driven Health Tech Innovation

Artificial intelligence-based healthcare technologies have contributed to improvements in drug discovery, tumor identification, diagnosis, risk assessment, electronic health records (EHR), and mental health tools, among other areas. Thanks in large part to AI and the availability of health-related data, health tech is one of the fastest-growing segments of healthcare and one of the reasons the sector ranks highest on many lists.

According to a 2016 workforce study by Georgetown University, the healthcare industry experienced the largest employment growth of any industry since December 2007, netting 2.3 million jobs (about an 8% increase). Fourteen percent of all US workers work in healthcare, making it the country’s largest employment sector. According to the latest government figures, the US spends more on healthcare per person ($10,348) than any other country. In fact, healthcare spending is nearly 18 percent of the US gross domestic product (GDP), a figure that is expected to increase. The healthcare IT segment is expected to grow at a compound annual growth rate (CAGR) greater than 10% through 2019. And the number of US patents issued in 2017 for AI-infused healthcare-related inventions rose more than 40% compared to 2016.

Investment in health tech has led to the development of some impressive AI-based tools. Researchers at a major university medical center, for example, invented a way to use AI to identify, from open source data, the emergence of health-related events around the world. The machine learning system they created extracted useful information and classified it according to disease-specific taxonomies. At the time of its development ten years ago, its “supervised” and “unsupervised” natural language processing models were leaps ahead of what others were using, and the work earned the inventors national recognition. More recently, medical researchers have created a myriad of new technologies through innovative uses of machine learning.

What most of the above and other health tech innovations today have in common is what drives much of the health tech sector: lots of data. Big data sets, especially labeled data, are needed by AI technologists to train and test machine learning algorithms that produce models capable of “learning” what to look for in new data. And there is no better place to find big data sets than in the healthcare sector. According to an article last year in the New England Journal of Medicine, by 2012 as much as 30% of the world’s stored data was being generated in the healthcare industry.

Traditional healthcare companies are finding value in data-driven AI. Biopharmaceutical company Roche’s recent announcement that it is acquiring software firm Flatiron Health Inc. for $1.9 billion illustrates the value of being able to access health-related data. Flatiron, led by former Google employees, makes software for real-time acquisition and analysis of oncology-specific EHR data and other structured and unstructured hospital-generated data for diagnostic and research purposes. Roche plans to leverage Flatiron’s algorithms–and all of its data–to enhance its ability to personalize healthcare strategies by accelerating the development of new cancer treatments. In a world powered by AI, where data is key to building new products that attract new customers, Roche is now tapped into one of the largest sources of labeled data.

Companies not traditionally in healthcare are also seeing opportunities in health-related data. Google’s AI-focused research division, for example, recently reported in Nature Biomedical Engineering a promising use of so-called deep learning algorithms (computational networks loosely structured to mimic how neurons in the brain fire) to make cardiovascular risk predictions from retinal image data. After training their model, Google scientists said they were able to identify and quantify risk factors in retinal images and generate patient-specific risk predictions.

The growth of available healthcare data and the infusion of AI health tech into the healthcare industry will challenge companies to evolve. Health tech holds the promise of better and more efficient research, manufacturing, and distribution of healthcare products and services, though some have also raised concerns about who will benefit most from these advances, bias in data sets, anonymizing data for privacy reasons, and other legal issues that extend beyond healthcare and will need to be addressed.

To be successful, tomorrow’s healthcare leaders may be those who have access to data that drives innovation in the health tech segment. This may explain why, according to a recent survey, healthcare CIOs whose companies plan spending increases in 2018 indicated that their investments will likely be directed first toward AI and related technologies.

Evaluating and Valuing an AI Business: Don’t Forget the IP

After record-breaking funding and deals involving artificial intelligence startups in 2017, it may be tempting to invest in the next AI business or business idea without a close look beyond a company’s data, products, user-base, and talent. Indeed, big tech companies seem willing to acquire, and investors seem happy to invest in, AI startups even before the founders have built anything. Defensible business valuations, however, involve many more factors, all of which need careful consideration during early planning of a new AI business or investing in one. One factor that should never be overlooked is a company’s actual or potential intellectual property rights underpinning its products.

Last year, Andrew Ng (of Coursera and Stanford; formerly Baidu and Google Brain) spoke about a Data-Product-Users model for evaluating whether an AI business is “defensible.” In this model, data holds a prominent position because information extracted from data drives development of products, which involve algorithms and networks trained using the data. Products in turn attract users who engage with the products and generate even more data.

While an AI startup’s data, and its ability to accumulate data, will remain a key valuation factor for investors, excellent products and product ideas are crucial for long-term data generation and growth. Thus, for an AI business to be defensible in today’s hot AI market, its products, more than its data, need to be defensible. One way to accomplish that is through patents.

It can be a challenge, though, to obtain patents for certain AI technologies. That’s partly because application stack developers and network architects rely on open source software and in-licensed third-party hardware tools with known utilities. Publicly disclosing information about products too early, and publishing novel solutions related to their development, including descriptions of algorithms and networks and their performance and accuracy, can also hinder a company’s ability to protect product-specific IP rights around the world. US federal court decisions and US Patent and Trademark Office proceedings can also be obstacles to obtaining and defending software-related patents (as discussed here). Even so, seeking patents (as well as carefully conceived brands and associated trademarks for products) is one of the best options for demonstrating to potential investors that a company’s products or product ideas are defensible and can survive in a competitive market.

Patents of course are not just important for AI startups, but also for established tech companies that acquire startups. IBM, for example, reportedly obtained or acquired about 1,400 patents in artificial intelligence in 2017. Amazon, Cisco, Google, and Microsoft were also among the top companies receiving machine learning patents in 2017 (as discussed here).

Patents may never generate direct revenues for an AI business like a company’s products can (unless a company can find willing licensees for its patents). But protecting the IP aspects of a product’s core technology can pay dividends in other ways, and thus adds value. So when brainstorming ideas for your company’s next AI product or considering possible investment targets involving AI technologies, don’t forget to consider whether the idea or investment opportunity has any IP associated with the AI.

Recognizing Individual Rights: A Step Toward Regulating Artificial Intelligence Technologies

In the movie Marjorie | Prime (August 2017), Jon Hamm plays an artificial intelligence version of Marjorie’s deceased husband, visible to Marjorie as a holographic projection in her beachfront home. As Marjorie (played by Lois Smith) interacts with Hamm’s Prime through a series of one-on-one conversations, the AI improves its cognition by observing and processing Marjorie’s emotional expressions, movements, and speech. The AI also learns from interactions with Marjorie’s son-in-law (Tim Robbins) and daughter (Geena Davis), as they recount highly personal and painful episodes of their lives. Through these interactions, Prime ends up possessing a collective knowledge greater, more personal, and more intimate than Marjorie’s original husband ever had.

Although not directly explored in the movie’s arc, the futuristic story touches on an important present-day debate about the fate of private personal data being uploaded to commercial and government AI systems, data that theoretically could persist in a memory device long after the end of the human lives from which the data originated, for as long as its owner chooses to keep it. It also raises questions about the fate of knowledge collected by other technologies perceiving other people’s lives, and to what extent these percepts, combined with people’s demographic, psychographic, and behavioristic characteristics, would be used to create sharply detailed personality profiles that companies and governments might abuse.

These are not entirely hypothetical issues to be addressed years down the road. Companies today provide the ability to create digital doppelgangers, or human digital twins, using AI technologies. And collecting personal information from people on a daily basis as they interact with digital assistants and other connected devices is not new. But as Marjorie|Prime and several non-cinematic AI technologies available today illustrate, AI systems allow the companies who build them unprecedented means for receiving, processing, storing, and taking actions based on some of the most personal information about people, including information about their present, past, and trending or future emotional states, which marketers for years have been suggesting are the keys to optimizing advertising content.

Congress recently acknowledged that “AI technologies are rapidly evolving in capability and application throughout society,” but the US currently has no federal policy towards AI and no part of the federal government has ownership of the advancement of AI technologies. Left unchecked in an unregulated market, as is largely the case today, AI technological advancements may trend in a direction that may be inconsistent with collective values and goals.

Identifying individual rights

One of the first questions those tasked with developing laws, regulations, and policies directed toward AI should ask is: what are the basic individual rights–rights that arise in the course of people interacting with AI technologies–that should be recognized? Answering that question will be key to ensuring that enacted laws and promulgated regulations achieve one of Congress’s recently stated goals: that AI technologies benefit society. Answering it now will also help ensure that policymakers have the necessary foundation in front of them and are not unduly swayed by influential stakeholders as they take up the task of deciding how and/or when to regulate AI technologies.

Identifying individual rights leads to their recognition, which leads to basic legal protections, whether in the form of legislation or regulation or, initially, as common law from judges deciding if and how to remedy a harm to a person or property caused by an AI system. Fortunately, identifying individual rights is not a formidable task. The belief that people have a right to be let alone in their private lives, for example, established the basic premise for privacy laws in the US. Those same concerns about intrusion into personal lives ought to be among the first considerations for those tasked with formulating and developing AI legislation and regulations. The notion that people have a right to be let alone has led to the identification of other individual rights that could protect people in their interactions with AI systems. These include the right of transparency and explanation; the right of audit (with the objective of revealing bias, discrimination, and content filtering, and thus maintaining accountability); the right to know when you are dealing with an AI system and not a human; and the right to be forgotten (that is, mandatory deletion of one’s personal data), among others.

Addressing individual rights, however, may not persuade everyone to trust AI systems, especially when AI creators cannot explain precisely the basis for certain actions taken by trained AI technologies. People want to trust that owners and developers of AI systems that use private personal data will employ the best safeguards to protect that data. Trust, but verify, may need to play a role in policy-making efforts even if policies appear to comprehensively address individual rights. Trust might be addressed by imposing specific reporting and disclosure requirements, such as those suggested by federal lawmakers in pending federal autonomous driving legislation.

In the end, however, laws and regulations developed with privacy and other individual rights in mind, that address data security and other concerns people have about trusting their data to AI companies, will invariably include gaps, omissions, and incomplete definitions. The result may be unregulated commercial AI systems, and AI businesses finding workarounds. In such instances, people may have limited options other than to fully opt out, or accept that individual AI technology developers’ work was motivated by ethical considerations and a desire to make something that benefits society. The pressure within many tech companies and startups to push new products out to the world every day, however, could make prioritizing ethical considerations a challenge. Many organizations focused on AI technologies, some of which are listed below, are working to make sure that doesn’t happen.

Rights, trust, and ethical considerations in commercial endeavors can get overshadowed by financial interests and the subjective interests and tastes of individuals. It doesn’t help that companies and policymakers may also feel that winning the race for AI dominance is a factor to be considered (which is not to say that such a consideration is antithetical to protecting individual rights). This underscores the need for thoughtful analysis, sooner rather than later, of the need for laws and regulations directed toward AI technologies.

To learn more about some of these issues, visit the websites of the following organizations, who are active in AI policy research: Access Now, AI Now, and Future of Life.

Legal Tech, Artificial Intelligence, and the Practice of Law in 2018

Due in part to a better understanding of available artificial intelligence legal tech tools, more lawyers will adopt and use AI technologies in 2018 than ever before. Better awareness will also drive the creation and marketing of specialized AI-focused practice areas within law firms, more lawyers with AI expertise, new business opportunities across multiple practice groups, and possibly another round of Associate salary increases as the demand for AI talent, both in-house and at law firms, escalates in response to the continued expansion of AI in key industries.

The legal services industry is poised to adopt AI technologies at the highest level seen to date. But that doesn’t mean lawyers are currently unfamiliar with AI. In fact, AI technologies are already widely used by legal practitioners, for example in case law search (web services in which a user’s natural language query is processed by a machine learning algorithm that returns a ranked and sorted list of relevant cases) and in electronic discovery (predictive analytics software that finds and tags relevant electronic documents for production during a lawsuit based on a taxonomy of keywords and phrases agreed upon by the parties).
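For readers curious what that kind of predictive tagging looks like under the hood, here is a simplified, generic sketch–using scikit-learn, with invented documents and labels, and not any vendor’s product–of the supervised text classification that typically powers it: lawyers tag a seed set of documents, a model learns from those tags, and the model then scores the remaining documents for relevance.

```python
# Generic sketch of predictive-coding-style document classification:
# a model trained on a lawyer-tagged seed set scores unreviewed documents.
# The documents and labels below are invented placeholders.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_documents = [
    "Email discussing the disputed licensing agreement and royalty terms.",
    "Quarterly cafeteria menu and parking garage notice.",
    "Draft amendment to the licensing agreement, marked confidential.",
    "Company picnic RSVP reminder.",
]
seed_labels = [1, 0, 1, 0]  # 1 = relevant to the dispute, 0 = not relevant

# Convert each document to word-frequency features, then fit a classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(seed_documents, seed_labels)

# Score an unreviewed document; higher probability = route to human review first.
new_doc = ["Counsel's notes on royalty calculations under the agreement."]
relevance = model.predict_proba(new_doc)[0, 1]
print(f"Estimated probability of relevance: {relevance:.2f}")
```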

Newer AI-based software solutions, however, from companies like Kira and Ross, among dozens of others now available, may improve the legal services industry’s understanding of AI. These solutions offer increased efficiency, improved client service, and reduced operating costs. Efficiency, measured in terms of the time it takes to respond to client questions and the number of billable hours expended, can translate into reduced operating costs for in-house counsel, law firm lawyers, judges, and their staffs, which is sure to get attention. AI-powered contract review software, for example, can take an agreement provided by opposing counsel and nearly instantaneously spot problems, a process that used to take an Associate or Partner a half-hour or more, depending on the contract’s complexity. In-house counsel are wary of paying biglaw hourly rates for such mundane review work, so software that can perform some of that work seems like a perfect solution. The law firms and lawyers that become comfortable using the latest AI-powered legal tech will be able to boast of being cutting edge and client-focused.

Lawyers and law firms with AI expertise are beginning to market AI capabilities on their websites to retain existing clients and capture new business, and this should increase in 2018. Firms are focusing efforts on industry segments most active in AI, such as tech, financial services (banks and financial technology companies or “fintech”), computer infrastructure (cloud services and chip makers), and other peripheral sectors, like those that make computer vision sensors and other devices for autonomous vehicles, robots, and consumer products, to name a few. Those same law firms are also looking at opportunities within the ever-expanding software as a service industry, which provides solutions for leveraging information from a company’s own data, such as human resources data, process data, quality assurance data, etc. Law practitioners who understand how these industries are using AI technologies, and AI’s limitations and potential biases, will have an edge when it comes to business development in the above-mentioned industry segments.

The impacts of AI on the legal industry in 2018 may also be reflected in law firm headcounts and salaries. Some reports suggest that the spread of AI legal tech could lead to a decrease in lawyer ranks, though most agree this will happen slowly and over several years.

At the same time, however, the increased attention directed at AI technologies by law firm lawyers and in-house counsel in 2018 may put pressure on law firms to adjust Associate salaries upward, as many did during the dot-com era, when demand skyrocketed for new and mid-level lawyers equipped to handle cash-infused Silicon Valley startups’ IPO, intellectual property, and contract issues. A possible Associate salary spike in 2018 may also be a consequence of, and fueled by, the huge salaries reportedly being paid in the tech sector, where big tech companies spent billions in 2016 and 2017 acquiring AI startups to add talent to their rosters. A recent report suggests annual salary and other incentives in the range of $350,000 to $500,000 are being paid to newly-minted PhDs and to those with just a few years of AI experience. At those levels, recent college graduates contemplating law school and a future in the legal profession might opt instead to head to graduate school for a Masters or PhD in an AI field.

Congress Takes Aim at the FUTURE of Artificial Intelligence

As the calendar turns over to 2018, artificial intelligence system developers will need to keep an eye on first-of-its-kind legislation being considered in Congress. The “Fundamentally Understanding The Usability and Realistic Evolution of Artificial Intelligence Act of 2017,” or FUTURE of AI Act, is Congress’s first major step toward comprehensive regulation of the AI tech sector.

Introduced on December 22, 2017, companion bills S.2217 and H.R.4625 touch on a host of AI issues, their stated purposes mirroring concerns raised by many about possible problems facing society as AI technologies become ubiquitous. The bills propose to establish a federal advisory committee charged with reporting to the Secretary of Commerce on many of today’s hot-button, industry-disrupting AI issues.

Definitions

Leaving the definition of “artificial intelligence” open for later modification, both bills take a broad brush at defining, inclusively, what an AI system is, what artificial general intelligence (AGI) means, and what “narrow” AI systems are, each of which presumably would be treated differently under future laws and implementing regulations.

Under both measures, AI is generally defined as “artificial systems that perform tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance,” and encompass systems that “solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.” According to the bills’ sponsors, the more “human-like the system within the context of its tasks, the more it can be said to use artificial intelligence.”

While those definitions and descriptions include plenty of ambiguity, characteristic of early legislative efforts, the bills also provide several clarifying examples: AI involves technologies that think like humans, such as cognitive architectures and neural networks; those that act like humans, such as systems that can pass the Turing test or other comparable test via natural language processing, knowledge representation, automated reasoning, and learning; those using sets of techniques, including machine learning, that seek to approximate some cognitive task; and AI technologies that act rationally, such as intelligent software agents and embodied robots that achieve goals via perception, planning, reasoning, learning, communicating, decision making, and acting.

The bills describe AGI as “a notional future AI system exhibiting apparently intelligent behavior at least as advanced as a person across the range of cognitive, emotional, and social behaviors,” which is generally consistent with how many others view the concept of an AGI system.

So-called narrow AI is viewed as an AI system that addresses specific application areas such as playing strategic games, language translation, self-driving vehicles, and image recognition. Plenty of other AI technologies today employ what the sponsors define as narrow AI.

The FUTURE of AI Committee

Both the House and Senate versions would establish a FUTURE of AI advisory committee made up of government and private-sector members tasked with evaluating and reporting on AI issues.

The bills emphasize that the committee should consider accountability and legal rights issues, including identifying where responsibility lies for violations of laws by an AI system, and assessing the compatibility of international regulations involving privacy rights of individuals who are or will be affected by technological innovation relating to AI. The committee will evaluate whether advancements in AI technologies have or will outpace the legal and regulatory regimes implemented to protect consumers, and how existing laws, including those concerning data access and privacy (as discussed here), should be modernized to enable the potential of AI.

The committee will study workforce impacts, including whether and how networked, automated, AI applications and robotic devices will displace or create jobs and how any job-related gains from AI can be maximized. The committee will also evaluate the role ethical issues should take in AI development, including whether and how to incorporate ethical standards in the development and implementation of AI, as suggested by groups such as IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems.

The committee will consider issues of machine learning bias in light of core cultural and societal norms, including how bias can be identified and eliminated in the development of AI and in the algorithms that support AI technologies. The committee will focus on evaluating the selection and processing of data used to train AI, diversity in the development of AI, the ways and places the systems are deployed and the potential harmful outcomes, and how ongoing dialogues and consultations with multi-stakeholder groups can maximize the potential of AI and further the development of AI technologies that can benefit everyone inclusively.

The FUTURE of AI committee will also consider issues of competitiveness of the United States, such as how to create a climate for public and private sector investment and innovation in AI, and the possible benefits and effects that the development of AI may have on the economy, workforce, and competitiveness of the United States. The committee will be charged with reviewing AI-related education; open sharing of data and the open sharing of research on AI; international cooperation and competitiveness; opportunities for AI in rural communities (that is, how the Federal Government can encourage technological progress in implementation of AI that benefits the full spectrum of social and economic classes); and government efficiency (that is, how the Federal Government utilizes AI to handle large or complex data sets, how the development of AI can affect cost savings and streamline operations in various areas of government operations, including health care, cybersecurity, infrastructure, and disaster recovery).

Non-profits like AI Now and Future of Life, among others, are also considering many of the same issues. And while those groups primarily rely on private funding, the FUTURE of AI advisory committee will be funded through Congressional appropriations or through contributions “otherwise made available to the Secretary of Commerce,” which may include donations from private persons and non-federal entities that have a stake in AI technology development. The bills limit such private donations to no more than 50% of the committee’s total funding from all sources.

The bills’ sponsors say that AI’s evolution can greatly benefit society by powering the information economy, fostering better informed decisions, and helping unlock answers to questions that are presently unanswerable. Their sentiment that fostering the development of AI should be done in a way that maximizes AI’s benefit to society provides a worthy goal for the FUTURE of AI advisory committee’s work. But it also suggests how AI companies may wish to approach their AI technology development efforts, especially in the interim period before future legislation becomes law.

How Privacy Law’s Beginnings May Suggest An Approach For Regulating Artificial Intelligence

A survey conducted in April 2017 by Morning Consult suggests most Americans are in favor of regulating artificial intelligence technologies. Of 2,200 American adults surveyed, 71% said they strongly or somewhat agreed that there should be national regulation of AI, while only 14% strongly or somewhat disagreed (15% did not express a view).

Technology and business leaders speaking out on whether to regulate AI fall into one of two camps: those who generally favor an ex post, case-by-case, common law approach, and those who prefer establishing a statutory and regulatory framework that, ex ante, sets forth clear do’s and don’ts and penalties for violations. (If you’re interested in learning about the challenges of ex post and ex ante approaches to regulation, check out Matt Scherer’s excellent article, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” published in the Harvard Journal of Law and Technology (2016)).

Advocates for a proactive regulatory approach caution that the alternative is fraught with predictable danger. Elon Musk for one, notes that, “[b]y the time we’re reactive in A.I., regulation’s too late.” Others, including leaders of some of the biggest AI technology companies in the industry, backed by lobbying organizations like the Information Technology Industry Council (ITI), feel that the hype surrounding AI does not justify quick Congressional action at this time.

Musk criticized this wait-and-see approach. “Normally, the way regulation’s set up,” he said, “a whole bunch of bad things happen, there’s a public outcry, and then after many years, a regulatory agency is set up to regulate that industry. There’s a bunch of opposition from companies who don’t like being told what to do by regulators, and it takes forever. That in the past has been bad but not something which represented a fundamental risk to the existence of civilization.”

Assuming AI regulation is inevitable, how should regulators (and legislators) approach such a formidable task? After all, AI technologies come in many forms, and their uses extend across multiple industries, including some already burdened with regulation. The history of privacy law may provide the answer.

Without question, privacy concerns, and privacy laws, touch on AI technology use and development. That’s because so much of today’s human-machine interaction involving AI is powered by user-provided or user-mined data. Search histories, images people appear in on social media, purchasing habits, home ownership details, political affiliations, and many other data points are well known to marketers and others whose products and services rely on characterizing potential customers using, for example, machine learning algorithms, convolutional neural networks, and other AI tools. In the field of affective computing, human-robot and human-chatbot interactions are driven by a person’s voice, facial features, heart rate, and other physiological features, which are the percepts that an AI system collects, processes, stores, and uses when deciding what actions to take, such as responding to user queries.

Privacy laws evolved from a period during late nineteenth century America when journalists were unrestrained in publishing sensational pieces for newspapers or magazines, basically the “fake news” of the time. This Yellow Journalism, as it was called, prompted legal scholars to express a view that people had a “right to be let alone,” setting in motion the development of a new body of law involving privacy. The key to regulating AI, as it was in the development of regulations governing privacy, may be the recognition of a specific personal right that is, or is expected to be, infringed by AI systems.

In the case of privacy, attorneys Samuel Warren and Louis Brandeis (later, Justice Brandeis) were the first to articulate a personal privacy right. In The Right to Privacy, published in the Harvard Law Review in 1890, Warren and Brandeis observed that “the press is overstepping in every direction the obvious bounds of propriety and of decency. Gossip…has become a trade.” They contended that “for years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons.” They argued that a right of privacy was entitled to recognition because “in every [] case the individual is entitled to decide whether that which is his shall be given to the public.” A violation of the person’s right of privacy, they wrote, should be actionable.

Soon after, courts began recognizing the right of privacy in civil cases. By 1960, in his seminal review article entitled Privacy (48 Cal. L. Rev. 383), William Prosser could write that, “In one form or another,” the right of privacy “was declared to exist by the overwhelming majority of the American courts.” That recognition gradually led to more uniform standards. Some states enacted limited or sweeping state-specific statutes, replacing the common law with statutory provisions and penalties. Federal appeals courts weighed in when conflicts between state laws arose. This slow progression, from initial recognition of a personal privacy right in 1890 to today’s modern statutes and expansive body of common law, won’t appeal to those pushing for regulation of AI now.

Even so, the process has to begin somewhere, and it could very well start with an assessment of the personal rights that should be recognized arising from interactions with or the use of AI technologies. Already, personal rights recognized by courts and embodied in statutes apply to AI technologies. But there is one personal right, potentially unique to AI technologies, that has been suggested: the right to know why (or how) an AI technology took a particular action (or made a decision) affecting a person.

Take, for example, an adverse credit decision by a bank that relies on machine learning algorithms to decide whether a customer should be given credit. Should that customer have the right to know why (or how) the system made the credit-worthiness decision? FastCompany writer Cliff Kuang explored this proposition in his recent article, “Can A.I. Be Taught to Explain Itself?” published in the New York Times (November 21, 2017).

If AI could explain itself, the banking customer might want to ask it what kind of training data was used and whether that data was biased, whether there was an errant line of Python code to blame, or whether the AI gave appropriate weight to the customer’s credit history. Given the nature of AI technologies, some of these questions, and even more general ones, may only be answered by opening the AI black box. But even then it may be impossible to pinpoint how the AI technology made its decision. In Europe, “tell me why/how” regulations are expected to become effective in May 2018. As I will discuss in a future post, many practical obstacles face those wishing to build a statutory or regulatory framework around the right of consumers to demand that businesses explain why their AI made or took a particular adverse action.
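One modest form of “tell me why” is possible when the underlying model is interpretable. The sketch below–using scikit-learn with invented feature names and toy data, not any bank’s actual system–reports how much each factor pushed a single credit decision toward approval or denial; black-box networks are far harder to explain this way:

```python
# Minimal sketch of an explainable credit decision: an interpretable (linear)
# classifier whose per-feature contributions can be reported for a single
# applicant. Features and numbers are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["payment_history_score", "debt_to_income", "years_of_credit"]

# Toy training data: each row is a past applicant, label 1 = credit granted.
X = np.array([[0.9, 0.2, 10], [0.4, 0.7, 2], [0.8, 0.3, 7],
              [0.3, 0.8, 1], [0.7, 0.4, 5], [0.2, 0.9, 1]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

applicant = np.array([0.5, 0.75, 3.0])
decision = model.predict(applicant.reshape(1, -1))[0]

# For a linear model, coefficient * feature value approximates how much each
# factor pushed the decision toward approval (+) or denial (-).
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(feature_names, contributions), key=lambda p: p[1]):
    print(f"{name}: {value:+.2f}")
print("Decision:", "approved" if decision == 1 else "denied")
```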

Regulation of AI will likely happen. In fact, we are already seeing the beginning of direct legislative/regulatory efforts aimed at the autonomous driving industry. Whether interest in expanding those efforts to other AI technologies grows or lags may depend at least in part on whether people believe they have personal rights at stake in AI, and whether those rights are being protected by current laws and regulations.