When It’s Your Data But Another’s Stack, Who Owns The Trained AI Model?

Cloud-based machine learning algorithms, made available as a service, have opened up the world of artificial intelligence to companies without the resources to develop their own AI models in-house. Providers of these services promise to help a company extract insights from its unique customer, employee, product, business process, and other data, and to use those insights to improve decisions, recommendations, and predictions, all without the company needing an army of data scientists and full-stack developers. Simply open an account, provide data to the service’s algorithms, train and test a model, and then incorporate the final model into the company’s toolbox.

While it seems reasonable to assume a company owns a model it develops with its own data, even one built on an algorithm residing on another’s platform, practice across the industry is not uniform. Why this matters is simple: a company’s model (characterized in part by its learned parameters, network architecture, and architecture-specific hyperparameters) may give the company an advantage over competitors. For instance, the company may have unique and proprietary data that its competitors do not. If a company wants to extract the most value from its data, it should take steps to protect not only that valuable data but also the models created from it.
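To make concrete what is at stake, consider the minimal sketch below, a purely illustrative example using scikit-learn rather than any particular provider’s platform. The trained model is essentially a bundle of the hyperparameters the company selects and the parameters learned from the company’s own data.

```python
# Illustrative sketch (hypothetical; not any cloud provider's API) of the
# artifacts that make up "the model" a company may want to protect.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
import joblib

# Stand-in for the company's proprietary data set.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Architecture-specific hyperparameters chosen by the company while tuning.
hyperparameters = {"C": 0.5, "penalty": "l2", "max_iter": 500}

model = LogisticRegression(**hyperparameters)
model.fit(X, y)  # training on the company's data produces the learned parameters

# The trained model worth protecting is this bundle: the learned parameters
# (coef_, intercept_) together with the hyperparameter and architecture choices.
print("learned coefficients:", model.coef_)
joblib.dump({"model": model, "hyperparameters": hyperparameters}, "company_model.joblib")
```

When training happens on a provider’s platform instead of locally, both the data and an artifact like this typically reside on the provider’s infrastructure, which is why the contract terms discussed below matter.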

How does a company know it has not given away rights to the data it uploads to another’s cloud servers, and that it owns the models it creates from that data? Conversely, how can a company confirm the cloud-based machine learning service has not reserved any rights in the model and data for its own use? The answer, of course, is likely embedded in the multiple terms of service, privacy, and user license agreements that apply to use of the service. If important provisions are missing, vague, or otherwise unfavorable, a company may want to look at alternative cloud-based platforms.

Consider the following example. Suppose a company wants to develop an AI model to improve an internal production process, one the company has enhanced over the years and that gives it a competitive advantage over others. Maybe its unique data set derives from a trade secret process or reflects expertise that its competitors could not easily replicate. With data in hand, the company enters into an agreement with a cloud-based machine learning service, uploads its data, and builds a unique model from the service’s many AI technologies, such as natural language processing (NLP), computer vision classifiers, and supervised learning tools. Once the best algorithms are selected, the data is used to train them and a model is created. The model can then be used in the company’s operations to improve efficiency and cut costs.
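In code, that workflow might look roughly like the sketch below. The service, endpoints, and field names (ml.example.com, /datasets, /training-jobs, and so on) are invented placeholders, not any real provider’s API; the point is simply that both the uploaded data and the resulting trained model end up residing on the provider’s platform.

```python
# Hypothetical sketch of the cloud ML workflow described above. Endpoints and
# parameters are invented placeholders, not a real provider's API.
import requests

API = "https://ml.example.com/v1"                # hypothetical service
HEADERS = {"Authorization": "Bearer <api-key>"}  # account credentials

# 1. Upload the company's proprietary process data to the provider's cloud.
with open("process_data.csv", "rb") as f:
    dataset = requests.post(f"{API}/datasets", headers=HEADERS, files={"file": f}).json()

# 2. Have the service train a model on that data using its algorithms.
job = requests.post(
    f"{API}/training-jobs",
    headers=HEADERS,
    json={"dataset_id": dataset["id"], "task": "supervised_classification"},
).json()

# 3. Retrieve a reference to the trained model for use in the company's operations.
model = requests.get(f"{API}/models/{job['model_id']}", headers=HEADERS).json()
print("Trained model hosted on the provider's platform:", model["id"])
```

Each step in that sketch is governed by the provider’s terms: what the provider may do with the uploaded data, and who owns the model created in step 2.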

Now let us assume the cloud service provider’s terms of service (TOS) states something like the following hypothetical:

“This agreement does not impliedly or otherwise grant either party any rights in or to the other’s content, or in or to any of the other’s trade secret or rights under intellectual property laws. The parties acknowledge and agree that Company owns all of its existing and future intellectual property and other rights in and concerning its data, the applications or models Company creates using the services, and Company’s project information provided as part of using the service, and Service owns all of its existing and future intellectual property and other rights in and to the services and software downloaded by Company to access the services. Service will not access nor use Company’s data, except as necessary to provide the services to Company.”

These terms appear to generally protect the company’s rights and interests in its data and in any models created using that data, and they indicate the machine learning service will not use the company’s data, or the model trained on it, except to provide the service. That last part, the exception, needs careful attention, because a service provider may define the services it performs quite broadly.

Now consider the following additional hypothetical TOS:

“Company acknowledges that Service may access Company’s data submitted to the service for the purpose of developing and improving the service, and any other of Service’s current, future, similar, or related services, and Company agrees to grant Service, its licensees, affiliates, assigns, and agents an irrevocable, perpetual right and permission to use Company’s data, because without those rights and permission Service cannot provide or offer the services to Company.”

The company may not be comfortable agreeing to those terms unless they are superseded by other, more favorable terms in another applicable agreement governing use of the cloud-based service.

While AI may be “the new electricity” powering large portions of the tech sector today, data is an important commodity in its own right, and so are the models behind an AI company’s products. So don’t forget to review the fine print before uploading company data to a cloud-based machine learning service.

Evaluating and Valuing an AI Business: Don’t Forget the IP

After record-breaking funding and deals involving artificial intelligence startups in 2017, it may be tempting to invest in the next AI business or business idea without a close look beyond a company’s data, products, user base, and talent. Indeed, big tech companies seem willing to acquire, and investors seem happy to invest in, AI startups even before the founders have built anything. Defensible business valuations, however, involve many more factors, all of which need careful consideration when planning a new AI business or investing in one. One factor that should never be overlooked is a company’s actual or potential intellectual property rights underpinning its products.

Last year, Andrew Ng (of Coursera and Stanford; formerly Baidu and Google Brain) spoke about a Data-Product-Users model for evaluating whether an AI business is “defensible.” In this model, data holds a prominent position because information extracted from data drives development of products, which involve algorithms and networks trained using the data. Products in turn attract users who engage with the products and generate even more data.

While an AI startup’s data, and its ability to accumulate data, will remain a key valuation factor for investors, excellent products and product ideas are crucial for long-term data generation and growth. Thus, for an AI business to be defensible in today’s hot AI market, its products, more than its data, need to be defensible. One way to accomplish that is through patents.

It can be a challenge, though, to obtain patents for certain AI technologies. That’s partly because application-stack developers and network architects rely on open source software and in-licensed third-party hardware tools with known utilities. Publicly disclosing information about products too early, and publishing novel solutions developed along the way (including descriptions of algorithms and networks and their performance and accuracy), can also hinder a company’s ability to protect product-specific IP rights around the world. US federal court decisions and US Patent and Trademark Office proceedings can also be obstacles to obtaining and defending software-related patents (as discussed here). Even so, seeking patents (as well as carefully conceived brands and associated trademarks for products) is one of the best ways to demonstrate to potential investors that a company’s products or product ideas are defensible and can survive in a competitive market.

Patents of course are not just important for AI startups, but also for established tech companies that acquire startups. IBM, for example, reportedly obtained or acquired about 1,400 patents in artificial intelligence in 2017. Amazon, Cisco, Google, and Microsoft were also among the top companies receiving machine learning patents in 2017 (as discussed here).

Patents may never generate direct revenues for an AI business like a company’s products can (unless a company can find willing licensees for its patents). But protecting the IP aspects of a product’s core technology can pay dividends in other ways, and thus adds value. So when brainstorming ideas for your company’s next AI product or considering possible investment targets involving AI technologies, don’t forget to consider whether the idea or investment opportunity has any IP associated with the AI.

Recognizing Individual Rights: A Step Toward Regulating Artificial Intelligence Technologies

In the movie Marjorie | Prime (August 2017), Jon Hamm plays an artificial intelligence version of Marjorie’s deceased husband, visible to Marjorie as a holographic projection in her beachfront home. As Marjorie (played by Lois Smith) interacts with Hamm’s Prime through a series of one-on-one conversations, the AI improves its cognition by observing and processing Marjorie’s emotional expressions, movements, and speech. The AI also learns from interactions with Marjorie’s son-in-law (Tim Robbins) and daughter (Geena Davis), as they recount highly personal and painful episodes of their lives. Through these interactions, Prime ends up possessing a collective knowledge greater, and more personal and intimate, than Marjorie’s original husband ever had.

Although not directly explored in the movie’s arc, the futuristic story touches on an important present-day debate about the fate of private personal data being uploaded to commercial and government AI systems, data that theoretically could persist in a memory device long after the end of the human lives from which the data originated, for as long as its owner chooses to keep it. It also raises questions about the fate of knowledge collected by other technologies perceiving other people’s lives, and to what extent these percepts, combined with people’s demographic, psychographic, and behavioristic characteristics, would be used to create sharply detailed personality profiles that companies and governments might abuse.

These are not entirely hypothetical issues to be addressed years down the road. Companies today provide the ability to create digital doppelgangers, or human digital twins, using AI technologies. And collecting personal information from people on a daily basis as they interact with digital assistants and other connected devices is not new. But as Marjorie | Prime and several non-cinematic AI technologies available today illustrate, AI systems allow the companies that build them unprecedented means for receiving, processing, storing, and taking actions based on some of the most personal information about people, including information about their present, past, and trending or future emotional states, which marketers for years have been suggesting are the keys to optimizing advertising content.

Congress recently acknowledged that “AI technologies are rapidly evolving in capability and application throughout society,” but the US currently has no federal policy towards AI and no part of the federal government has ownership of the advancement of AI technologies. Left unchecked in an unregulated market, as is largely the case today, AI technological advancements may trend in directions inconsistent with collective values and goals.

Identifying individual rights

One of the first questions those tasked with developing laws, regulations, and policies directed toward AI should ask is: what basic individual rights, rights that arise as people interact with AI technologies, should be recognized? Answering that question will be key to ensuring that enacted laws and promulgated regulations achieve one of Congress’s recently stated goals: ensuring AI technologies benefit society. Answering it now will also give policymakers the necessary foundation to resist being unduly swayed by influential stakeholders as they take up the task of deciding how and when to regulate AI technologies.

Identifying individual rights leads to their recognition, which leads to basic legal protections, whether in the form of legislation or regulation or, initially, as common law from judges deciding if and how to remedy a harm to a person or property caused by an AI system. Fortunately, identifying individual rights is not a formidable task. The belief that people have a right to be let alone in their private lives, for example, established the basic premise for privacy laws in the US. Those same concerns about intrusion into personal lives ought to be among the first considerations for those tasked with formulating AI legislation and regulations. The notion that people have a right to be let alone has also led to the identification of other individual rights that could protect people in their interactions with AI systems. These include the right to transparency and explanation, the right of audit (with the objective of revealing bias, discrimination, and content filtering, and thus maintaining accountability), the right to know when you are dealing with an AI system rather than a human, and the right to be forgotten (that is, mandatory deletion of one’s personal data), among others.

Addressing individual rights, however, may not persuade everyone to trust AI systems, especially when AI creators cannot explain precisely the basis for certain actions taken by trained AI technologies. People want to trust that owners and developers of AI systems that use private personal data will employ the best safeguards to protect that data. Trust, but verify, may need to play a role in policy-making efforts even if policies appear to comprehensively address individual rights. Trust might be addressed by imposing specific reporting and disclosure requirements, such as those suggested by federal lawmakers in pending federal autonomous driving legislation.

In the end, however, laws and regulations developed with privacy and other individual rights in mind, and that address data security and other concerns people have about entrusting their data to AI companies, will invariably include gaps, omissions, and incomplete definitions. The result may be unregulated commercial AI systems and AI businesses finding workarounds. In such instances, people may have few options other than to opt out entirely, or to trust that AI developers were motivated by ethical considerations and a desire to make something that benefits society. The pressure within many tech companies and startups to push new products out to the world every day, however, could make prioritizing ethical considerations a challenge. Many organizations focused on AI technologies, some of which are listed below, are working to make sure that doesn’t happen.

Rights, trust, and ethical considerations in commercial endeavors can get overshadowed by financial interests and the subjective interests and tastes of individuals. It doesn’t help that companies and policymakers may also feel that winning the race for AI dominance is a factor to be considered (which is not to say that such a consideration is antithetical to protecting individual rights). This underscores the need for thoughtful analysis, sooner rather than later, of the need for laws and regulations directed toward AI technologies.

To learn more about some of these issues, visit the websites of the following organizations, which are active in AI policy research: Access Now, AI Now, and Future of Life.

Patenting Artificial Intelligence: Innovation Spike Follows Broader Market Trend

If you received a US patent for a machine learning invention recently, count yourself among a record number of innovators named on artificial intelligence technology patents issued in 2017. There’s also a good chance you worked for one of the top companies earning patents for machine learning, neural network, and other AI technologies, namely IBM, Amazon, Cisco, Google, and Microsoft, according to public patent records (available through mid-December). This year’s increase in the number of issued patents reflects similar record increases in the level of investment dollars flowing to AI start-ups and the number of AI tech sector M&A deals in 2017.

Public patent records indicate that US patents directed to “machine learning” jumped over 20% in 2017 compared to 2016, following an even larger estimated 38% increase between 2015 and 2016. Even discounting patents that merely mention machine learning in passing, the numbers are still quite impressive, especially given the US Supreme Court’s 2014 Alice Corp. Pty. Ltd. v. CLS Bank decision, which led to the invalidation of many software and business method patents and likely also put the brakes on software-related patent application filings beginning in 2014 (as explained here). So the recent jump in issued patents for “machine learning,” “artificial intelligence,” and “neural network” inventions suggests that specific applications of those technologies remain patentable despite Alice.

A jump in issued patents in a highly competitive, increasingly crowded market segment could lead to an uptick in patent-related disputes. Already, some estimates suggest that 35% more companies expect to face IP litigation in 2018 than in 2017.

How Privacy Law’s Beginnings May Suggest An Approach For Regulating Artificial Intelligence

A survey conducted in April 2017 by Morning Consult suggests most Americans are in favor of regulating artificial intelligence technologies. Of 2,200 American adults surveyed, 71% said they strongly or somewhat agreed that there should be national regulation of AI, while only 14% strongly or somewhat disagreed (15% did not express a view).

Technology and business leaders speaking out on whether to regulate AI fall into one of two camps: those who generally favor an ex post, case-by-case, common law approach, and those who prefer establishing a statutory and regulatory framework that, ex ante, sets forth clear do’s and don’ts and penalties for violations. (If you’re interested in learning about the challenges of ex post and ex ante approaches to regulation, check out Matt Scherer’s excellent article, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” published in the Harvard Journal of Law and Technology (2016)).

Advocates for a proactive regulatory approach caution that the alternative is fraught with predictable danger. Elon Musk, for one, notes that “[b]y the time we’re reactive in A.I., regulation’s too late.” Others, including leaders of some of the biggest AI technology companies in the industry, backed by lobbying organizations like the Information Technology Industry Council (ITI), feel that the hype surrounding AI does not justify quick Congressional action at this time.

Musk criticized this wait-and-see approach. “Normally, the way regulation’s set up,” he said, “a whole bunch of bad things happen, there’s a public outcry, and then after many years, a regulatory agency is set up to regulate that industry. There’s a bunch of opposition from companies who don’t like being told what to do by regulators, and it takes forever. That in the past has been bad but not something which represented a fundamental risk to the existence of civilization.”

Assuming AI regulation is inevitable, how should regulators (and legislators) approach such a formidable task? After all, AI technologies come in many forms, and their uses extend across multiple industries, including some already burdened with regulation. The history of privacy law may provide the answer.

Without question, privacy concerns, and privacy laws, touch on AI technology use and development. That’s because so many of today’s human-machine interactions involving AI are powered by user-provided or user-mined data. Search histories, images people appear in on social media, purchasing habits, home ownership details, political affiliations, and many other data points are well-known to marketers and others whose products and services rely on characterizing potential customers using, for example, machine learning algorithms, convolutional neural networks, and other AI tools. In the field of affective computing, human-robot and human-chatbot interactions are driven by a person’s voice, facial features, heart rate, and other physiological features, which are the percepts that the AI system collects, processes, stores, and uses when deciding actions to take, such as responding to user queries.

Privacy laws evolved from a period in late nineteenth-century America when journalists were largely unrestrained in publishing sensational pieces for newspapers and magazines, basically the “fake news” of the time. This Yellow Journalism, as it was called, prompted legal scholars to express a view that people had a “right to be let alone,” setting in motion the development of a new body of law involving privacy. The key to regulating AI, as it was in the development of regulations governing privacy, may be the recognition of a specific personal right that is, or is expected to be, infringed by AI systems.

In the case of privacy, attorneys Samuel Warren and Louis Brandeis (later, Justice Brandeis) were the first to articulate a personal privacy right. In The Right to Privacy, published in the Harvard Law Review in 1890, Warren and Brandeis observed that “the press is overstepping in every direction the obvious bounds of propriety and of decency. Gossip…has become a trade.” They contended that “for years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons.” They argued that a right of privacy was entitled to recognition because “in every [such] case the individual is entitled to decide whether that which is his shall be given to the public.” A violation of a person’s right of privacy, they wrote, should be actionable.

Soon after, courts began recognizing the right of privacy in civil cases. By 1960, in his seminal review article entitled Privacy (48 Cal. L. Rev. 383), William Prosser wrote that, “in one form or another,” the right of privacy “was declared to exist by the overwhelming majority of the American courts.” That recognition, however, did not produce uniform standards. Some states enacted limited or sweeping state-specific statutes, replacing the common law with statutory provisions and penalties. Federal appeals courts weighed in when conflicts between state laws arose. This slow progression, from initial recognition of a personal privacy right in 1890 to today’s modern statutes and expansive body of common law, won’t appeal to those pushing for regulation of AI now.

Even so, the process has to begin somewhere, and it could very well start with an assessment of the personal rights that should be recognized arising from interactions with or the use of AI technologies. Already, personal rights recognized by courts and embodied in statutes apply to AI technologies. But there is one personal right, potentially unique to AI technologies, that has been suggested: the right to know why (or how) an AI technology took a particular action (or made a decision) affecting a person.

Take, for example, an adverse credit decision by a bank that relies on machine learning algorithms to decide whether a customer should be given credit. Should that customer have the right to know why (or how) the system made the credit-worthiness decision? FastCompany writer Cliff Kuang explored this proposition in his recent article, “Can A.I. Be Taught to Explain Itself?” published in the New York Times (November 21, 2017).

If AI could explain itself, the banking customer might want to ask what training data was used and whether that data was biased, whether an errant line of Python code was to blame, or whether the AI gave appropriate weight to the customer’s credit history. Given the nature of AI technologies, some of these questions, and even more general ones, may only be answered by opening the AI black box; and even then it may be impossible to pinpoint how the AI technology made its decision. In Europe, “tell me why/how” regulations are expected to become effective in May 2018. As I will discuss in a future post, many practical obstacles face those wishing to build a statutory or regulatory framework around the right of consumers to demand that a business’s AI explain why it made or took a particular adverse action.
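To illustrate one form such an explanation could take, here is a minimal, hypothetical sketch: a simple logistic-regression credit model whose per-feature contributions to an adverse decision can be read directly from its coefficients. The features and data are invented, and real credit systems (particularly those built on deep networks) are far harder to interpret, which is precisely the problem.

```python
# Hypothetical sketch of a "tell me why" explanation for a simple credit model.
# Features, data, and weights are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["credit_history_years", "income", "debt_ratio", "late_payments"]
rng = np.random.default_rng(0)

# Synthetic stand-in for the bank's training data.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] - 1.5 * X[:, 3] + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain a single adverse decision: each feature's contribution to the log-odds.
applicant = np.array([-0.2, 0.1, 1.3, 2.0])
contributions = model.coef_[0] * applicant
decision = model.predict(applicant.reshape(1, -1))[0]

print("decision:", "approve" if decision == 1 else "deny")
for name, value in sorted(zip(features, contributions), key=lambda pair: pair[1]):
    print(f"  {name}: {value:+.2f}")  # the most negative contributions drove the denial
```

For a linear model like this one, the explanation falls out of the arithmetic; for the deep architectures behind many modern AI systems, no comparably simple decomposition exists.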

Regulation of AI will likely happen. In fact, we are already seeing the beginning of direct legislative/regulatory efforts aimed at the autonomous driving industry. Whether interest in expanding those efforts to other AI technologies grows or lags may depend at least in part on whether people believe they have personal rights at stake in AI, and whether those rights are being protected by current laws and regulations.

Inaugural Post – AI Tech and the Law

Welcome. I am excited to present the first of what I hope will be many useful and timely posts covering issues arising at the crossroads of artificial intelligence technology and the law. My goal with this blog is to provide insightful discussion concerning the legal issues expected to affect individuals and businesses as they develop and interact with AI products and services. I also hope to engage with AI thought leaders in the legal industry as new AI technology-specific issues emerge. Join me by sharing your thoughts about AI and the law. If you’d like to see a particular issue discussed on these pages, I invite you to send me an email.

Much has already been written about the promises of AI and its ever-increasing role in daily life. AI technologies are unquestionably making their presence known in many impactful ways. Three billion smartphones are in use worldwide, and many of them use one form of AI or another. Voice assistants driven by AI are appearing on kitchen countertops everywhere. Online search engines, powered by AI, deliver your search results. Selecting like, love, dislike, or thumbs-down in your music streaming or news aggregating apps empowers AI algorithms to make recommendations for you.

Today’s tremendous AI industry expansion, driven by big data and enhanced computational power, will continue at an unprecedented rate. We are seeing investors fund AI-focused startups across the globe. As Mark Cuban predicted earlier this year, the world’s first trillionaire will be an AI entrepreneur.

Not everyone, however, shares the same positive outlook concerning AI. Elon Musk, Bill Gates, Stephen Hawking and others have raised concerns. Many foresee problems arising as AI becomes ubiquitous, especially if businesses are left to develop AI systems without guidance. The media have written about employees displaced by autonomous systems; bias, social justice, and civil rights concerns in big data; AI consumer product liability; privacy and data security; superintelligent systems; and other issues. Some have even predicted dire consequences from unchecked AI.

But with all the talk about AI–both positive and negative–businesses are operating in a vacuum of laws, regulations, and court opinions dealing directly with AI. Indeed, with only a few exceptions, most businesses today have little in the way of legal guidance about acceptable practices when it comes to developing and deploying their AI systems. While some advocate for a common law approach to dealing with AI problems on a case-by-case basis, others would like to see a more structured regulatory framework.

I look forward to considering these and other issues in the months to come.

Brian Higgins