The AI Summit New York City: Takeaways For the Legal Profession

This week, business, technology, and academic thought leaders in Artificial Intelligence are gathered at The AI Summit in New York City, one of the premier international conferences for AI professionals. Below, I consider two of the three takeaways from Summit Day 1, published yesterday by AI Business, from the perspective of lawyers looking for opportunities in the burgeoning AI market.

“1. The tech landscape is changing fast – with big implications for businesses”

If a year from now your law practice has not fielded at least one query from a client about AI technologies, you are probably going out of your way to avoid the subject. It is almost universally accepted that AI technologies in one form or another will impact nearly every industry. Based on recently published salary data, the industries most active in AI are tech (think Facebook, Amazon, Alphabet, Microsoft, Netflix, and many others), financial services (banks and financial technology, or “fintech,” companies), and computer infrastructure (Amazon, Nvidia, Intel, IBM, and many others, in areas such as chips for increasing computational speed and throughput, and cloud computing for big data storage needs).

Of course, other industries are also seeing plenty of AI development. The automotive industry, for example, has already begun adopting machine learning, computer vision, and other AI technologies for autonomous vehicles. The robotics and chatbot industries have made great strides lately, both in humanoid robot development and in consumer-machine interaction products such as stationary and mobile digital assistants (e.g., personal robotic assistants, as well as utility devices like autonomous vacuums). And the software-as-a-service industry, which leverages a company’s own data, such as human resources data, process data, and healthcare data, seems to offer new software solutions for improving efficiency every day.

All of this will translate into consumer adoption of specific AI technologies, which is reported to be at 10% already and growing. The fast pace of technology development and adoption may translate into new business opportunities for lawyers, especially those who invest time in learning about AI technologies. After all, as in any area of law, understanding the challenges facing clients is essential for developing appropriate legal strategies, as well as for targeting business development resources.

“2. AI is a disruptive force today, not tomorrow – and business must adapt”

“Adapt or be left behind” is a cautionary refrain, and there is plenty of evidence demonstrating that it holds true in many situations.

Lawyers and law firms are, as institutions, generally slow to change, often because anything that disrupts the status quo is viewed through a cautionary lens. This is not surprising, given that a lawyer’s work often involves thoughtfully spotting potential risks and finding ways to address them. A fast-changing business landscape racing to keep up with the latest AI technologies may be seen as inherently risky, especially in the absence of targeted laws and regulations providing guidance, as is the case today in the AI industry. Even so, exploring how to adapt one’s law practice to a world filled with AI technologies should be near the top of every lawyer’s list of things to consider for 2018.

How Privacy Law’s Beginnings May Suggest An Approach For Regulating Artificial Intelligence

A survey conducted in April 2017 by Morning Consult suggests most Americans are in favor of regulating artificial intelligence technologies. Of 2,200 American adults surveyed, 71% said they strongly or somewhat agreed that there should be national regulation of AI, while only 14% strongly or somewhat disagreed (15% did not express a view).

Technology and business leaders speaking out on whether to regulate AI fall into one of two camps: those who generally favor an ex post, case-by-case, common law approach, and those who prefer establishing a statutory and regulatory framework that, ex ante, sets forth clear do’s and don’ts and penalties for violations. (If you’re interested in learning about the challenges of ex post and ex ante approaches to regulation, check out Matt Scherer’s excellent article, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” published in the Harvard Journal of Law and Technology (2016)).

Advocates for a proactive regulatory approach caution that the alternative is fraught with predictable danger. Elon Musk, for one, notes that “[b]y the time we’re reactive in A.I., regulation’s too late.” Others, including leaders of some of the biggest AI technology companies in the industry, backed by lobbying organizations like the Information Technology Industry Council (ITI), feel that the hype surrounding AI does not justify quick Congressional action at this time.

Musk criticized this wait-and-see approach. “Normally, the way regulation’s set up,” he said, “a whole bunch of bad things happen, there’s a public outcry, and then after many years, a regulatory agency is set up to regulate that industry. There’s a bunch of opposition from companies who don’t like being told what to do by regulators, and it takes forever. That in the past has been bad but not something which represented a fundamental risk to the existence of civilization.”

Assuming AI regulation is inevitable, how should regulators (and legislators) approach such a formidable task? After all, AI technologies come in many forms, and their uses extend across multiple industries, including some already burdened with regulation. The history of privacy law may provide the answer.

Without question, privacy concerns, and privacy laws, touch on AI technology use and development. That’s because so much of today’s human-machine interactions involving AI are powered by user-provided or user-mined data. Search histories, images people appear in on social media, purchasing habits, home ownership details, political affiliations, and many other data points are well-known to marketers and others whose products and services rely on characterizing potential customers using, for example, machine learning algorithms, convolutional neural networks, and other AI tools. In the field of affective computing, human-robot and human-chatbot interactions are driven by a person’s voice, facial features, heart rate, and other physiological features, which are the percepts that the AI system collects, processes, stores, and uses when deciding actions to take, such as responding to user queries.

Privacy law grew out of a period in late nineteenth-century America when journalists were unrestrained in publishing sensational pieces for newspapers and magazines, basically the “fake news” of the time. This yellow journalism, as it was called, prompted legal scholars to express the view that people had a “right to be let alone,” setting in motion the development of a new body of law involving privacy. The key to regulating AI, as it was in the development of privacy regulation, may be the recognition of a specific personal right that is, or is expected to be, infringed by AI systems.

In the case of privacy, attorneys Samuel Warren and Louis Brandeis (later, Justice Brandeis) were the first to articulate a personal privacy right. In The Right to Privacy, published in the Harvard Law Review in 1890, Warren and Brandeis observed that “the press is overstepping in every direction the obvious bounds of propriety and of decency. Gossip…has become a trade.” They contended that “for years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons.” They argued that a right of privacy was entitled to recognition because “in every [] case the individual is entitled to decide whether that which is his shall be given to the public.” A violation of the person’s right of privacy, they wrote, should be actionable.

Soon after, courts began recognizing the right of privacy in civil cases. By 1960, in his seminal review article entitled Privacy (48 Cal. L. Rev. 383), William Prosser wrote that, “[i]n one form or another,” the right of privacy “was declared to exist by the overwhelming majority of the American courts.” That recognition, however, did not produce uniform standards. Some states enacted limited or sweeping state-specific statutes, replacing the common law with statutory provisions and penalties. Federal appeals courts weighed in when conflicts between state laws arose. This slow progression, from initial recognition of a personal privacy right in 1890 to today’s modern statutes and expansive body of common law, won’t appeal to those pushing for regulation of AI now.

Even so, the process has to begin somewhere, and it could very well start with an assessment of the personal rights that should be recognized as arising from interactions with, or the use of, AI technologies. Already, personal rights recognized by courts and embodied in statutes apply to AI technologies. But one personal right, potentially unique to AI technologies, has been suggested: the right to know why (or how) an AI technology took a particular action (or made a decision) affecting a person.

Take, for example, an adverse credit decision by a bank that relies on machine learning algorithms to decide whether a customer should be given credit. Should that customer have the right to know why (or how) the system made the credit-worthiness decision? FastCompany writer Cliff Kuang explored this proposition in his recent article, “Can A.I. Be Taught to Explain Itself?” published in the New York Times (November 21, 2017).

If AI could explain itself, the banking customer might want to ask what training data was used and whether that data was biased, whether an errant line of Python code was to blame, or whether the AI gave appropriate weight to the customer’s credit history. Given the nature of AI technologies, some of these questions, and even more general ones, may only be answered by opening the AI black box. But even then it may be impossible to pinpoint how the AI technology made its decision. In Europe, “tell me why/how” regulations are expected to become effective in May 2018. As I will discuss in a future post, many practical obstacles face those wishing to build a statutory or regulatory framework around the right of consumers to demand that a business’s AI explain why it took a particular adverse action.
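
To make the “tell me why” idea concrete, here is a minimal, hypothetical sketch of one way a simple, interpretable credit model could account for a decision: a logistic regression’s learned coefficients show how much weight each input carried for a given applicant. The feature names and data below are invented for illustration, and real credit systems are assumed to be far more complex (and far less transparent) than this.

```python
# A minimal, hypothetical sketch (not any bank's actual system) of how an
# interpretable credit model can "explain" a decision: a logistic regression's
# learned coefficients show how much weight each input carried.
# The feature names and synthetic data below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["credit_history_years", "income", "debt_to_income", "late_payments"]

# Synthetic, standardized data standing in for historical lending records.
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] - 1.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For one applicant, each feature's contribution to the score is its value
# times the corresponding learned coefficient; the sum drives approve/deny.
applicant = np.array([0.2, -0.1, 1.3, 2.0])
contributions = applicant * model.coef_[0]
for name, value in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name:>22}: {value:+.2f}")
print("decision:", "approve" if model.predict(applicant.reshape(1, -1))[0] == 1 else "deny")
```

Whether this kind of coefficient-level accounting would satisfy a “right to know why” is, of course, exactly the open legal question.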

Regulation of AI will likely happen. In fact, we are already seeing the beginning of direct legislative/regulatory efforts aimed at the autonomous driving industry. Whether interest in expanding those efforts to other AI technologies grows or lags may depend at least in part on whether people believe they have personal rights at stake in AI, and whether those rights are being protected by current laws and regulations.

Artificial Intelligence Won’t Achieve Legal Inventorship Status Anytime Soon

Imagine a deposition in which an inventor is questioned about her conception and reduction to practice of an invention directed to a chemical product worth billions of dollars to her company. Testimony reveals how artificial intelligence software, assessing huge amounts of data, identified the patented compound and the compound’s new uses in helping combat disease. The inventor states that she simply performed tests confirming the compound’s qualities and its utility, which the software had already determined. The attorney taking the deposition moves to invalidate the patent on the basis that the patent does not identify the true inventor. The true inventor, the attorney argues, was the company’s AI software.

Seem farfetched? Maybe not in today’s AI world. AI tools can spot cancer and other problems in diagnostic images, as well as identify patient-specific treatments. AI software can identify workable drug combinations for effectively combating pests. AI can predict biological events emerging in hotspots on the other side of the world, even before they’re reported by local media and officials. And lawyers are becoming more aware of AI through use of machine learning tools to predict the relevance of case law, answer queries about how a judge might respond to a particular set of facts, and assess the strength of contracts, among other tools. So while the above deposition scenario is hypothetical, it seems far from unrealistic.

One thing is for sure, however: an AI program will not be named as an inventor or joint inventor on a patent any time soon. At least not until Congress amends US patent laws to broaden the definition of “inventor” and the Supreme Court clarifies what “conception” of an invention means in a world filled with artificially intelligent technologies.

That’s because US patent laws are intended to protect the natural intellectual output of humans, not the artificial intelligence of algorithms. Indeed, Congress left little wiggle room when it defined “inventor” to mean an “individual,” or in the case of a joint invention, the “individuals” collectively who invent or discover the subject matter of an invention. And the Supreme Court has endorsed a human-centric notion of inventorship. This has led courts overseeing patent disputes to repeatedly remind us that “conception” is the touchstone of inventorship, where conception is defined as the “formation in the mind of the inventor, of a definite and permanent idea of the complete and operative invention, as it is hereafter to be applied in practice.”

But consider this. What if “in the mind of” were struck from the definition of “conception” and inventorship? Under that revised definition, an AI system might indeed be viewed as conceiving an invention.

By way of example, let’s say the same AI software and the researcher from the above deposition scenario were participants behind the partition in a classic Turing Test. Would an interrogator be able to distinguish the AI inventor from the natural intelligence inventor if the test for conception of the chemical compound invention is reduced to examining whether the chemical compound idea was “definite” (not vague), “permanent” (fixed), “complete,” “operative” (it works as conceived), and has a practical application (real world utility)? If you were the interrogator in this Turing Test, would you choose the AI software or the researcher who did the follow-up confirmatory testing?

Those who follow patent law may see the irony of legally recognizing AI software as an “inventor” if it “conceives” an invention, when the very same software would likely face an uphill battle being patented by its developers because of the apparent “abstract” nature of many software algorithms.

In any case, for now the question of whether inventorship and inventions should be assessed based on their natural or artificial origin may merely be an academic one. But that may need to change when artificial intelligence development produces artificial general intelligence (AGI) that is capable of performing the same intellectual tasks that a human can.

Marketing “Artificial Intelligence” Needs Careful Planning to Avoid Trademark Troubles

As the market for all things artificial intelligence continues heating up, companies are looking for ways to align their products, services, and entire brands with “artificial intelligence” designations and phrases common in the surging artificial intelligence industry, including variants such as “AI,” “deep,” “neural,” and others. Reminiscent of the dot-com era of the early 2000s, when companies rushed to market with “i-” or “e-” prefixes or appended “.com” to their names, today’s artificial intelligence startups are finding traction with AI-related terms and corresponding “.AI” domains. The proliferation of AI marketing, however, may lead to brand and domain disputes. But a carefully planned intellectual property strategy may help avoid potential risks down the road, as the recent case Stella.AI, Inc. v. Stellar A.I., Inc., filed in the U.S. District Court for the Northern District of California, demonstrates.

According to court filings, New York City-based Stella.AI, Inc., provider of a job-matching website, claims that its “stella.AI” domain has been in use since March 2016, and its STELLA trademark since February 2016 (its U.S. federal trademark application was reportedly published for opposition by the US Patent and Trademark Office in April 2016). Palo Alto-based talent and employment agency Stellar A.I., formerly JobGenie, obtained its “stellar.ai” domain and sought trademark registration for STELLAR.AI in January 2017, a move that, Stella.AI claims, came after JobGenie learned of Stella.AI, Inc.’s domain. Stella.AI’s complaint alleges unfair competition and false designation of origin based on a confusingly similar mark and domain name, and it seeks monetary damages and the transfer of the stellar.ai domain.

In its answer to the complaint, Stellar A.I. says that it created, used, and marketed its services under the STELLAR.AI mark in good faith without prior knowledge of Stella.AI, Inc.’s mark, and in any case, any infringement of the STELLA mark was unintentional.

Artificial intelligence startups face plenty of challenges getting their businesses up and running. The last thing they want to worry about is unexpected trademark litigation involving their “AI” brand and domain names. Fortunately, some practical steps taken early may help reduce the risk of such problems.

As a start, marketers should consider thoroughly searching for conflicting federal, state, and common law uses of a planned company, product, or service name, and they should evaluate corresponding domains as part of an early branding strategy. Trademark searches often reveal other, potentially confusingly similar, uses of a mark. Plenty of search firms offer these services and will return a list of marks that might present problems. For those who want to conduct their own search, a good place to start is the US Patent and Trademark Office’s TESS database, which can be searched for federal trademark registrations and pending applications. The search results should be evaluated with the assistance of the company’s intellectual property attorney.
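
As a purely illustrative sketch, and no substitute for a professional clearance search or attorney review, a first-pass screen can be as simple as scoring a proposed mark’s spelling similarity against a list of known marks; the marks listed below are placeholders.

```python
# A purely illustrative first-pass screen: rank known marks by how closely
# their spelling resembles a proposed mark. Placeholder marks only; this is
# no substitute for a professional clearance search or attorney review.
from difflib import SequenceMatcher

proposed = "STELLA"
known_marks = ["STELLAR.AI", "STELLARIS", "BELLA.AI", "ACME"]

def similarity(a: str, b: str) -> float:
    """Return a spelling-similarity ratio in [0, 1]; higher means more alike."""
    return SequenceMatcher(None, a.upper(), b.upper()).ratio()

# Marks near the top of this list warrant a closer look.
for mark in sorted(known_marks, key=lambda m: similarity(proposed, m), reverse=True):
    print(f"{mark:>12}: {similarity(proposed, mark):.2f}")
```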

It is also good practice to look beyond obtaining a single top-level domain for a company and its brands. For example, if “xyzco.ai” is in play as a possible company “AI” domain name, also consider “xyzco.com” and other top-level domains to prevent someone else from getting their hands on your name. Moreover, consider obtaining domains embodying likely shortcuts and misspellings that prospective customers might use (e.g., “xzyco.ai” transposes two letters); a small sketch of that idea follows.
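
The following is a small, hypothetical sketch of generating defensive registration candidates: adjacent-letter transpositions of a placeholder name, paired with a few alternate top-level domains. “xyzco” and the TLD list are stand-ins, not recommendations.

```python
# A small, hypothetical sketch of generating defensive domain candidates:
# adjacent-letter transpositions of a placeholder name, paired with a few
# alternate top-level domains. "xyzco" and the TLD list are stand-ins.
def transpositions(name: str) -> list[str]:
    """Return every variant of `name` with two adjacent letters swapped."""
    variants = []
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        if swapped != name:
            variants.append(swapped)
    return variants

base = "xyzco"
tlds = [".ai", ".com", ".net", ".io"]

candidates = {base + tld for tld in tlds}
candidates.update(variant + tld for variant in transpositions(base) for tld in tlds)
print(sorted(candidates))
```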

Marketers would also be wise to exercise caution when using competitors’ marks on their company website, although making legitimate comparisons between competing products remains fair use even when the competing products are identified by their trademarks. In such situations, comparisons should clearly state that the marketer’s product is not affiliated with the competitor’s product, and website links to competitors’ products should be avoided.

While startups often focus limited resources on protecting their technology by filing patent applications (or by implementing a comprehensive trade secret policy), a startup’s intellectual property strategy should also consider trademark issues to avoid having to re-brand down the road, as Stellar A.I. did (their new name and domain are now “Stellares” and “stellares.ai,” respectively).

Do Artificial Intelligence Technologies Need Regulating?

At some point, yes. But when? And how?

Today, AI is largely unregulated by federal and state governments. That may change as technologies incorporating AI continue to expand into communications, education, healthcare, law, law enforcement, manufacturing, transportation, and other industries, and prominent scientists as well as lawmakers continue raising concerns about unchecked AI.

The only Congressional proposals directly aimed at AI technologies so far have been limited to regulating Highly Autonomous Vehicles (HAVs, or self-driving cars). In developing those proposals, the House Energy and Commerce Committee brought stakeholders to the table in June 2017 to offer their input. In other areas of AI development, however, technologies are reportedly being developed without the input of those whose knowledge and experience might provide acceptable and appropriate direction.

Tim Hwang, an early adopter of AI technology in the legal industry, says individual artificial intelligence researchers are “basically writing policy in code” that reflects personal perspectives or biases. Kate Crawford, a co-founder of the AI Now Institute, speaking with Wired magazine, assessed the problem this way: “Who gets a seat at the table in the design of these systems? At the moment, it’s driven by engineering and computer science experts who are designing systems that touch everything from criminal justice to healthcare to education. But in the same way that we wouldn’t expect a federal judge to optimize a neural network, we shouldn’t be expecting an engineer to understand the workings of the criminal justice system.”

Those concerns frame part of the debate over regulating the AI industry, but timing is another big question. Shivon Zilis, an investor at Bloomberg Beta, cautions that AI is here and will become a very powerful technology, so the public discussion about regulation needs to happen now. Others, like Alphabet chairman Eric Schmidt, consider the government regulation debate premature.

A fundamental challenge for Congress and government regulators is how to regulate AI. As AI technologies advance from the simple to the super-intelligent, a one-size-fits-all regulatory approach could cause more problems than it addresses. At one end of the AI technology spectrum, simple AI systems may need little regulatory oversight. At the other end, super-intelligent autonomous systems may come to be viewed as having rights, and a more focused set of regulations may be appropriate. The Information Technology Industry Council (ITI), a lobbying group, “encourage[s] governments to evaluate existing policy tools and use caution before adopting new laws, regulations, or taxes that may inadvertently or unnecessarily impede the responsible development and use of AI.”

Regulating the AI industry will require careful thought and planning. Government regulations are hard to get right, and they rarely please everyone. Regulate too much and economic activity can be stifled. Regulate too little (or not at all) and the consequences could be worse. Congress and regulators will also need to assess the impacts of AI-specific regulations on an affected industry years and decades down the road, a difficult task when market trends and societal acceptance of AI will likely alter the trajectory of the AI industry in possibly unforeseen ways.

But we may be getting ahead of ourselves. Kate Darling recently noted that stakeholders have not yet agreed on basic definitions for AI. For example, there is not even a universally accepted definition today of what a “robot” is.

Sources:
June 2017 House Energy and Commerce Committee, Hearings on Self-Driving Cars

Wired Magazine: Why AI is Still Waiting for its Ethics Transplant

TechCrunch

Futurism

Gizmodo

Federal Circuit: AI, IoT, and Robotics in “Danger” Due to Uncertainty Surrounding Patent Abstraction Test

In Purepredictive, Inc. v. H2O.ai, Inc., the U.S. District Court for the Northern District of California (J. Orrick) granted Mountain View-based H2O.ai’s motion to dismiss a patent infringement complaint. In doing so, the court found that the claims of asserted U.S. patent 8,880,446 were invalid on the grounds that they “are directed to the abstract concept of the manipulation of mathematical functions and make use of computers only as tools, rather than provide a specific improvement on a computer-related technology.”

Decisions like this hardly make news these days, given the frequency with which software patents are being invalidated by district courts across the country following the Supreme Court’s 2014 Alice Corp. Pty Ltd. v. CLS Bank decision. Perhaps that is why the U.S. Court of Appeals for the Federal Circuit, the specialized appeals court for patent cases based in Washington, DC, chose a recent case to publicly acknowledge that “great uncertainty yet remains” concerning Alice’s patent-eligibility test, despite the large number of post-Alice cases that have “attempted to provide practical guidance.” Calling the uncertainty “dangerous” for some of today’s “most important inventions in computing” (specifically identifying medical diagnostics, artificial intelligence (AI), the Internet of Things (IoT), and robotics), the Federal Circuit expressed concern that perhaps Alice has gone too far, a belief shared by others, especially smaller technology companies whose value is tied to their software intellectual property.

Utah-based Purepredictive says its ‘446 patent involves “AI driving machine learning ensembling.” The district court characterized the patent as being directed to a software method that performs “predictive analytics” in three steps. In the method’s first step, the court said, it receives data and generates “learned functions” (for example, regressions) from that data. Second, it evaluates the effectiveness of those learned functions at making accurate predictions based on test data. Finally, it selects the most effective learned functions and creates a rule set for additional data input. This method, the district court found, is merely “directed to a mental process” performed by a computer, and “the abstract concept of using mathematical algorithms to perform predictive analytics” by collecting and analyzing information.
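
For readers unfamiliar with ensembling, the following is a generic, hypothetical sketch of the three steps the court described, generating candidate “learned functions,” evaluating them on held-out test data, and keeping the best performer. It illustrates ensembling in general; it is not a reconstruction of the ‘446 patent’s claimed method.

```python
# A generic, hypothetical sketch of the three steps the court described:
# (1) generate candidate "learned functions," (2) evaluate them on test data,
# (3) select the most effective for future predictions. This illustrates
# ensembling generally, not the '446 patent's claimed method.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: generate candidate learned functions (e.g., regressions) from the data.
candidates = [LinearRegression(), Ridge(alpha=1.0), DecisionTreeRegressor(max_depth=3)]
fitted = [m.fit(X_train, y_train) for m in candidates]

# Step 2: evaluate each learned function's accuracy on held-out test data.
errors = [mean_squared_error(y_test, m.predict(X_test)) for m in fitted]

# Step 3: select the most effective learned function for additional data input.
best = min(zip(fitted, errors), key=lambda t: t[1])[0]
print(type(best).__name__, "selected for future predictions")
```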

Alice critics have long pointed to the subjective nature of Alice’s patent-eligibility test. Under Alice, for the subject matter of a patent claim to be patent eligible under 35 U.S.C. § 101, it may not be “directed to” a patent-ineligible concept, i.e., a law of nature, natural phenomenon, or abstract idea. If it is, it may nevertheless constitute patentable subject matter if the particular elements of the claim, considered both individually and as an ordered combination, add enough to transform the nature of the claim into a patent-eligible application. This two-part test has led to the invalidation of many software patents as “abstract,” and presents an obstacle for inventors of new software tools seeking patent protection for their inventions.

In the Purepredictive case, the district court found that the claimed methods “are mathematical processes that not only could be performed by humans but also go to the general abstract concept of predictive analytics rather than any specific application.” The “could be performed by humans” inquiry would seem problematic for many software-based patent claims, including those directed to AI algorithms, even though humans could never, in a lifetime, perform the same feat as many AI algorithms, given the enormous domain space those algorithms are tasked with evaluating.

In any event, while Alice’s abstractness test will continue to pose challenges for those seeking patents, time will tell whether it has the “dangerous” impact on the burgeoning AI, IoT, and robotics industries that the Federal Circuit suggested.

Sources:

Purepredictive, Inc. v. H2O.AI, Inc., slip op., No. 17-cv-03049-WHO (N.D. Cal. Aug. 29, 2017).

Smart Systems Innovations, LLC v. Chicago Transit Authority, slip op., No. 2016-1233 (Fed. Cir. Oct. 18, 2017) (citing Alice Corp. Pty Ltd. v. CLS Bank, 134 S. Ct. 2347, 2354-55 (2014)).

Inaugural Post – AI Tech and the Law

Welcome. I am excited to present the first of what I hope will be many useful and timely posts covering issues arising at the crossroads of artificial intelligence technology and the law. My goal with this blog is to provide insightful discussion concerning the legal issues expected to affect individuals and businesses as they develop and interact with AI products and services. I also hope to engage with AI thought leaders in the legal industry as new AI technology-specific issues emerge. Join me by sharing your thoughts about AI and the law. If you’d like to see a particular issue discussed on these pages, I invite you to send me an email.

Much has already been written about the promises of AI and its ever-increasing role in daily life. AI technologies are unquestionably making their presence known in many impactful ways. Three billion smartphones are in use worldwide, and many of them use one form of AI or another. Voice assistants driven by AI are appearing on kitchen countertops everywhere. Online search engines, powered by AI, deliver your search results. Selecting like, love, dislike, or thumbs-down in your music streaming or news aggregating apps empowers AI algorithms to make recommendations for you.

Today’s tremendous AI industry expansion, driven by big data and enhanced computational power, will continue at an unprecedented rate. We are seeing investors fund AI-focused startups across the globe. As Mark Cuban predicted earlier this year, the world’s first trillionaire will be an AI entrepreneur.

Not everyone, however, shares the same positive outlook concerning AI. Elon Musk, Bill Gates, Stephen Hawking, and others have raised concerns. Many foresee problems arising as AI becomes ubiquitous, especially if businesses are left to develop AI systems without guidance. The media have written about employees displaced by autonomous systems; bias, social justice, and civil rights concerns in big data; AI consumer product liability; privacy and data security; superintelligent systems; and other issues. Some have even predicted dire consequences from unchecked AI.

But with all the talk about AI–both positive and negative–businesses are operating in a vacuum of laws, regulations, and court opinions dealing directly with AI. Indeed, with only a few exceptions, most businesses today have little in the way of legal guidance about acceptable practices when it comes to developing and deploying their AI systems. While some advocate for a common law approach to dealing with AI problems on a case-by-case basis, others would like to see a more structured regulatory framework.

I look forward to considering these and other issues in the months to come.

Brian Higgins