Industry Focus: The Rise of Data-Driven Health Tech Innovation

Artificial intelligence-based healthcare technologies have contributed to improvements in drug discovery, tumor identification, diagnosis, risk assessment, electronic health records (EHR), and mental health tools, among other areas. Thanks in large part to AI and the availability of health-related data, health tech is one of the fastest-growing segments of healthcare and one of the reasons the sector ranks so high on many industry lists.

According to a 2016 workforce study by Georgetown University, the healthcare industry experienced the largest employment growth of any industry since December 2007, netting 2.3 million jobs (about an 8% increase). Fourteen percent of all US workers work in healthcare, making it the country’s largest source of employment. According to the latest government figures, the US spends more on healthcare per person ($10,348) than any other country. In fact, healthcare spending is nearly 18 percent of the US gross domestic product (GDP), a figure that is expected to increase. The healthcare IT segment is expected to grow at a CAGR greater than 10% through 2019, and the number of US patents issued in 2017 for AI-infused healthcare-related inventions rose more than 40% compared to 2016.

Investment in health tech has led to the development of some impressive AI-based tools. Researchers at a major university medical center, for example, invented a way to use AI to identify, from open source data, the emergence of health-related events around the world. The machine learning system they created extracted useful information and classified it according to disease-specific taxonomies. At the time of its development ten years ago, its “supervised” and “unsupervised” natural language processing models were leaps ahead of what others were using, and the work earned the inventors national recognition. More recently, medical researchers have created a myriad of new tools from innovative uses of machine learning.

What most of the above and other health tech innovations today have in common is what drives much of the health tech sector: lots of data. AI technologists need big data sets, especially labeled data, to train and test machine learning algorithms, producing models capable of “learning” what to look for in new data. And there is no better place to find big data sets than in the healthcare sector. According to an article last year in the New England Journal of Medicine, by 2012 as much as 30% of the world’s stored data was being generated in the healthcare industry.
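
As a minimal sketch of that train-and-test workflow (assuming scikit-learn and purely synthetic stand-in records, not real health data):

```python
# Minimal sketch of training and testing a supervised model on labeled data.
# Assumes scikit-learn is installed; the data here is synthetic and purely
# illustrative, not real health records.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "labeled" data: each row is a record, each label a known outcome.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out part of the data so the trained model can be tested on unseen records.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Accuracy on held-out data: {model.score(X_test, y_test):.2f}")
```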

Traditional healthcare companies are finding value in data-driven AI. Biopharmaceutical company Roche’s recent announcement that it is acquiring software firm Flatiron Health Inc. for $1.9 billion illustrates the value of being able to access health-related data. Flatiron, led by former Google employees, makes software for real-time acquisition and analysis of oncology-specific EHR data and other structured and unstructured hospital-generated data for diagnostic and research purposes. Roche plans to leverage Flatiron’s algorithms, and all of its data, to personalize healthcare strategies by accelerating the development of new cancer treatments. In a world powered by AI, where data is key to building new products that attract new customers, Roche is now tapped into one of the largest sources of labeled data.

Companies not traditionally in healthcare are also seeing opportunities in health-related data. Google’s AI-focused research division, for example, recently reported in Nature a promising use of so-called deep learning algorithms (computational networks structured to loosely mimic how neurons fire in the brain) to make cardiovascular risk predictions from retinal image data. After training their model, Google scientists said they were able to identify and quantify risk factors in retinal images and generate patient-specific risk predictions.
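
A heavily simplified sketch of that kind of image-based risk model might look like the following; the tiny architecture, input size, and random placeholder data are illustrative assumptions, not a rendering of the published Google model.

```python
# Toy convolutional network for binary risk prediction from images.
# The architecture and data are illustrative only; the published research
# used far larger models trained on hundreds of thousands of retinal images.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # risk score between 0 and 1
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder arrays standing in for labeled retinal images and risk labels.
images = np.random.rand(32, 64, 64, 3).astype("float32")
labels = np.random.randint(0, 2, size=(32,))
model.fit(images, labels, epochs=1, verbose=0)
print(model.predict(images[:1]))  # a patient-specific risk prediction
```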

The growth of available healthcare data and the infusion of AI health tech into the healthcare industry will challenge companies to evolve. Health tech holds the promise of better and more efficient research, manufacturing, and distribution of healthcare products and services. Some, however, have raised concerns about who will benefit most from these advances, bias in data sets, anonymizing data for privacy reasons, and other legal issues that extend beyond healthcare, all of which will need to be addressed.

Tomorrow’s successful healthcare leaders may be those with access to the data that drives innovation in the health tech segment. This may explain why, according to a recent survey, healthcare CIOs whose companies plan spending increases in 2018 indicated that their investments will likely be directed first toward AI and related technologies.

Evaluating and Valuing an AI Business: Don’t Forget the IP

After record-breaking funding and deals involving artificial intelligence startups in 2017, it may be tempting to invest in the next AI business or business idea without a close look beyond a company’s data, products, user base, and talent. Indeed, big tech companies seem willing to acquire, and investors seem happy to invest in, AI startups even before the founders have built anything. Defensible business valuations, however, involve many more factors, all of which need careful consideration when planning a new AI business or investing in one. One factor that should never be overlooked is a company’s actual or potential intellectual property rights underpinning its products.

Last year, Andrew Ng (of Coursera and Stanford; formerly Baidu and Google Brain) spoke about a Data-Product-Users model for evaluating whether an AI business is “defensible.” In this model, data holds a prominent position because information extracted from data drives development of products, which involve algorithms and networks trained using the data. Products in turn attract users who engage with the products and generate even more data.

While an AI startup’s data, and its ability to accumulate data, will remain a key valuation factor for investors, excellent products and product ideas are crucial for long-term data generation and growth. Thus, for an AI business to be defensible in today’s hot AI market, its products, more than its data, need to be defensible. One way to accomplish that is through patents.

It can be a challenge, though, to obtain patents for certain AI technologies. That’s partly because application stack developers and network architects rely on open source software and in-licensed third-party hardware tools with known utilities. Publicly disclosing information about products too early, and publishing novel problem-solutions related to their development (including descriptions of algorithms and networks and their performance and accuracy), can also hinder a company’s ability to protect product-specific IP rights around the world. US federal court decisions and US Patent and Trademark Office proceedings can also be obstacles to obtaining and defending software-related patents (as discussed here). Even so, seeking patents (as well as carefully conceived brands and associated trademarks for products) is one of the best options for demonstrating to potential investors that a company’s products or product ideas are defensible and can survive in a competitive market.

Patents of course are not just important for AI startups, but also for established tech companies that acquire startups. IBM, for example, reportedly obtained or acquired about 1,400 patents in artificial intelligence in 2017. Amazon, Cisco, Google, and Microsoft were also among the top companies receiving machine learning patents in 2017 (as discussed here).

Patents may never generate direct revenues for an AI business like a company’s products can (unless a company can find willing licensees for its patents). But protecting the IP aspects of a product’s core technology can pay dividends in other ways, and thus adds value. So when brainstorming ideas for your company’s next AI product or considering possible investment targets involving AI technologies, don’t forget to consider whether the idea or investment opportunity has any IP associated with the AI.

Recognizing Individual Rights: A Step Toward Regulating Artificial Intelligence Technologies

In the movie Marjorie | Prime (August 2017), Jon Hamm plays an artificial intelligence version of Marjorie’s deceased husband, visible to Marjorie as a holographic projection in her beachfront home. As Marjorie (played by Lois Smith) interacts with Hamm’s Prime through a series of one-on-one conversations, the AI improves its cognition by observing and processing Marjorie’s emotional expressions, movements, and speech. The AI also learns from interactions with Marjorie’s son-in-law (Tim Robbins) and daughter (Geena Davis) as they recount highly personal and painful episodes of their lives. Through these interactions, Prime ends up possessing a collective knowledge greater, more personal, and more intimate than Marjorie’s original husband ever had.

Although not directly explored in the movie’s arc, the futuristic story touches on an important present-day debate about the fate of private personal data uploaded to commercial and government AI systems: data that theoretically could persist in a memory device long after the end of the human lives from which it originated, for as long as its owner chooses to keep it. It also raises questions about the fate of knowledge collected by other technologies perceiving other people’s lives, and about the extent to which these percepts, combined with people’s demographic, psychographic, and behavioristic characteristics, could be used to create sharply detailed personality profiles that companies and governments might abuse.

These are not entirely hypothetical issues to be addressed years down the road. Companies today provide the ability to create digital doppelgangers, or human digital twins, using AI technologies. And collecting personal information from people on a daily basis as they interact with digital assistants and other connected devices is not new. But as Marjorie | Prime and several non-cinematic AI technologies available today illustrate, AI systems give the companies who build them unprecedented means for receiving, processing, storing, and acting on some of the most personal information about people, including information about their present, past, and trending or future emotional states, which marketers have for years been suggesting are the keys to optimizing advertising content.

Congress recently acknowledged that “AI technologies are rapidly evolving in capability and application throughout society,” but the US currently has no federal policy toward AI, and no part of the federal government has ownership of the advancement of AI technologies. Left unchecked in an unregulated market, as is largely the case today, AI technological advancements may trend in a direction inconsistent with collective values and goals.

Identifying individual rights

One of the first questions those tasked with developing laws, regulations, and policies directed toward AI should ask is: what basic individual rights, arising in the course of people interacting with AI technologies, should be recognized? Answering that question now will be key to ensuring that enacted laws and promulgated regulations achieve one of Congress’s recently stated goals, that AI technologies benefit society, and to ensuring that policymakers have the necessary foundation in front of them and are not unduly swayed by influential stakeholders as they take up the task of deciding how and when to regulate AI technologies.

Identifying individual rights leads to their recognition, which leads to basic legal protections, whether in the form of legislation or regulation or, initially, as common law from judges deciding if and how to remedy a harm to a person or property caused by an AI system. Fortunately, identifying individual rights is not a formidable task. The belief that people have a right to be let alone in their private lives, for example, established the basic premise for privacy laws in the US. Those same concerns about intrusion into personal lives ought to be among the first considerations for those tasked with formulating and developing AI legislation and regulations. The notion that people have a right to be let alone has led to the identification of other individual rights that could protect people in their interactions with AI systems. These include the right of transparency and explanation; the right of audit (with the objective of revealing bias, discrimination, and content filtering, and thus maintaining accountability); the right to know when you are dealing with an AI system and not a human; and the right to be forgotten (that is, mandatory deletion of one’s personal data), among others.

Addressing individual rights, however, may not persuade everyone to trust AI systems, especially when AI creators cannot explain precisely the basis for certain actions taken by trained AI technologies. People want to trust that owners and developers of AI systems that use private personal data will employ the best safeguards to protect that data. Trust, but verify, may need to play a role in policy-making efforts even if policies appear to comprehensively address individual rights. Trust might be addressed by imposing specific reporting and disclosure requirements, such as those suggested by federal lawmakers in pending federal autonomous driving legislation.

In the end, however, laws and regulations developed with privacy and other individual rights in mind, and that address data security and other concerns people have about trusting their data to AI companies, will invariably include gaps, omissions, and incomplete definitions. The result may be unregulated commercial AI systems, and AI businesses finding workarounds. In such instances, people may have few options other than to opt out entirely, or to accept that individual AI technology developers’ work was motivated by ethical considerations and a desire to make something that benefits society. The pressure within many tech companies and startups to push new products out to the world every day, however, could make prioritizing ethical considerations a challenge. Many organizations focused on AI technologies, some of which are listed below, are working to make sure that doesn’t happen.

Rights, trust, and ethical considerations in commercial endeavors can get overshadowed by financial interests and the subjective interests and tastes of individuals. It doesn’t help that companies and policymakers may also feel that winning the race for AI dominance is a factor to be considered (which is not to say that such a consideration is antithetical to protecting individual rights). This underscores the need for thoughtful analysis, sooner rather than later, of the need for laws and regulations directed toward AI technologies.

To learn more about some of these issues, visit the websites of the following organizations, which are active in AI policy research: Access Now, AI Now, and Future of Life.

Legal Tech, Artificial Intelligence, and the Practice of Law in 2018

Due in part to a better understanding of available artificial intelligence legal tech tools, more lawyers will adopt and use AI technologies in 2018 than ever before. Better awareness will also drive the creation and marketing of specialized AI-focused practice areas within law firms, more lawyers with AI expertise, new business opportunities across multiple practice groups, and possibly another round of Associate salary increases as demand for AI talent, both in-house and at law firms, escalates in response to the continued expansion of AI in key industries.

The legal services industry is poised to adopt AI technologies at the highest level seen to date. But that doesn’t mean lawyers are currently unfamiliar with AI. In fact, AI technologies are already widely used by legal practitioners, such as those that power case law searches (web services in which a user’s natural language search query is processed by a machine learning algorithm, which returns a ranked and sorted list of relevant cases) and those used in electronic discovery of documents (predictive analytics software that finds and tags relevant electronic documents for production during a lawsuit based on a taxonomy of keywords and phrases agreed upon by the parties).
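
The predictive-tagging idea can be sketched in a few lines; the documents, labels, and use of a TF-IDF plus logistic-regression pipeline below are illustrative assumptions, and commercial e-discovery platforms are considerably more sophisticated.

```python
# Sketch of predictive document tagging: train on documents attorneys have
# already labeled as relevant/not relevant, then score new documents.
# Documents and labels here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviewed_docs = [
    "email discussing the disputed license agreement",
    "lunch menu for the cafeteria",
    "draft amendment to the license agreement terms",
    "holiday party invitation",
]
labels = [1, 0, 1, 0]  # 1 = relevant, 0 = not relevant (attorney-reviewed)

tagger = make_pipeline(TfidfVectorizer(), LogisticRegression())
tagger.fit(reviewed_docs, labels)

new_docs = ["forwarded copy of the license agreement", "parking reminder"]
for doc, prob in zip(new_docs, tagger.predict_proba(new_docs)[:, 1]):
    print(f"{prob:.2f}  {doc}")  # estimated probability the document is relevant
```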

Newer AI-based software solutions, however, from companies like Kira and Ross, among dozens of others now available, may improve the legal services industry’s understanding of AI. These solutions offer increased efficiency, improved client service, and reduced operating costs. Efficiency, measured in terms of the time it takes to respond to client questions and the number of billable hours expended, can translate into reduced operating costs for in-house counsel, law firm lawyers, judges, and their staffs, which is sure to get attention. AI-powered contract review software, for example, can take an agreement provided by opposing counsel and nearly instantaneously spot problems, a process that used to take an Associate or Partner a half-hour or more, depending on the contract’s complexity. In-house counsel are wary of paying biglaw hourly rates for such mundane review work, so software that can perform some of the work seems like a perfect solution. The law firms and lawyers that become comfortable using the latest AI-powered legal tech will be able to boast of being cutting edge and client-focused.

Lawyers and law firms with AI expertise are beginning to market AI capabilities on their websites to retain existing clients and capture new business, and this should increase in 2018. Firms are focusing efforts on industry segments most active in AI, such as tech, financial services (banks and financial technology companies or “fintech”), computer infrastructure (cloud services and chip makers), and other peripheral sectors, like those that make computer vision sensors and other devices for autonomous vehicles, robots, and consumer products, to name a few. Those same law firms are also looking at opportunities within the ever-expanding software as a service industry, which provides solutions for leveraging information from a company’s own data, such as human resources data, process data, quality assurance data, etc. Law practitioners who understand how these industries are using AI technologies, and AI’s limitations and potential biases, will have an edge when it comes to business development in the above-mentioned industry segments.

The impacts of AI on the legal industry in 2018 may also be reflected in law firm headcounts and salaries. Some reports suggest that the spread of AI legal tech could lead to a decrease in lawyer ranks, though most agree this will happen slowly and over several years.

At the same time, however, the increased attention directed at AI technologies by law firm lawyers and in-house counsel in 2018 may put pressure on law firms to adjust Associate salaries upward, as many did during the dot-com era, when demand skyrocketed for new and mid-level lawyers equipped to handle cash-infused Silicon Valley startups’ IPO, intellectual property, and contract issues. A possible Associate salary spike in 2018 may also be a consequence of, and fueled by, the huge salaries reportedly being paid in the tech sector, where big tech companies spent billions in 2016 and 2017 acquiring AI startups to add talent to their rosters. A recent report suggests annual salary and other incentives in the range of $350,000 to $500,000 are being paid to newly minted PhDs and to those with just a few years of AI experience. At those levels, recent college graduates contemplating law school and a future in the legal profession might opt instead to head to graduate school for a Masters or PhD in an AI field.

Congress Takes Aim at the FUTURE of Artificial Intelligence

As the calendar turns over to 2018, artificial intelligence system developers will need to keep an eye on first-of-its-kind legislation being considered in Congress. The “Fundamentally Understanding The Usability and Realistic Evolution of Artificial Intelligence Act of 2017,” or FUTURE of AI Act, is Congress’s first major step toward comprehensive regulation of the AI tech sector.

Introduced on December 22, 2017, companion bills S.2217 and H.R.4625 touch on a host of AI issues, their stated purposes mirroring concerns raised by many about possible problems facing society as AI technologies become ubiquitous. The bills propose to establish a federal advisory committee charged with reporting to the Secretary of Commerce on many of today’s hot-button, industry-disrupting AI issues.

Definitions

Leaving the definition of “artificial intelligence” open for later modification, both bills take a broad-brush approach to defining, inclusively, what an AI system is, what artificial general intelligence (AGI) means, and what “narrow” AI systems are, each of which presumably would be treated differently under future laws and implementing regulations.

Under both measures, AI is generally defined as “artificial systems that perform tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance,” and encompasses systems that “solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.” According to the bills’ sponsors, the more “human-like the system within the context of its tasks, the more it can be said to use artificial intelligence.”

While those definitions and descriptions include plenty of ambiguity, characteristic of early legislative efforts, the bills also provide several clarifying examples: AI involves technologies that think like humans, such as cognitive architectures and neural networks; those that act like humans, such as systems that can pass the Turing test or other comparable test via natural language processing, knowledge representation, automated reasoning, and learning; those using sets of techniques, including machine learning, that seek to approximate some cognitive task; and AI technologies that act rationally, such as intelligent software agents and embodied robots that achieve goals via perception, planning, reasoning, learning, communicating, decision making, and acting.

The bills describe AGI as “a notional future AI system exhibiting apparently intelligent behavior at least as advanced as a person across the range of cognitive, emotional, and social behaviors,” which is generally consistent with how many others view the concept of an AGI system.

So-called narrow AI is viewed as an AI system that addresses specific application areas such as playing strategic games, language translation, self-driving vehicles, and image recognition. Plenty of other AI technologies today employ what the sponsors define as narrow AI.

The FUTURE of AI Committee

Both the House and Senate versions would establish a FUTURE of AI advisory committee made up of government and private-sector members tasked with evaluating and reporting on AI issues.

The bills emphasize that the committee should consider accountability and legal rights issues, including identifying where responsibility lies for violations of law by an AI system, and assessing the compatibility of international regulations involving the privacy rights of individuals who are or will be affected by technological innovation relating to AI. The committee will evaluate whether advancements in AI technologies have outpaced, or will outpace, the legal and regulatory regimes implemented to protect consumers, and how existing laws, including those concerning data access and privacy (as discussed here), should be modernized to enable the potential of AI.

The committee will study workforce impacts, including whether and how networked, automated, AI applications and robotic devices will displace or create jobs and how any job-related gains from AI can be maximized. The committee will also evaluate the role ethical issues should take in AI development, including whether and how to incorporate ethical standards in the development and implementation of AI, as suggested by groups such as IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems.

The committee will consider issues of machine learning bias through core cultural and societal norms, including how bias can be identified and eliminated in the development of AI and in the algorithms that support AI technologies. The committee will focus on evaluating the selection and processing of data used to train AI, diversity in the development of AI, the ways and places the systems are deployed and the potential harmful outcomes, and how ongoing dialogues and consultations with multi-stakeholder groups can maximize the potential of AI and further development of AI technologies that can benefit everyone inclusively.

The FUTURE of AI committee will also consider issues of competitiveness of the United States, such as how to create a climate for public and private sector investment and innovation in AI, and the possible benefits and effects that the development of AI may have on the economy, workforce, and competitiveness of the United States. The committee will be charged with reviewing AI-related education; open sharing of data and the open sharing of research on AI; international cooperation and competitiveness; opportunities for AI in rural communities (that is, how the Federal Government can encourage technological progress in implementation of AI that benefits the full spectrum of social and economic classes); and government efficiency (that is, how the Federal Government utilizes AI to handle large or complex data sets, how the development of AI can affect cost savings and streamline operations in various areas of government operations, including health care, cybersecurity, infrastructure, and disaster recovery).

Non-profits like AI Now and Future of Life, among others, are also considering many of the same issues. And while those groups rely primarily on private funding, the FUTURE of AI advisory committee will be funded through Congressional appropriations or through contributions “otherwise made available to the Secretary of Commerce,” which may include donations from private persons and non-federal entities that have a stake in AI technology development. The bills limit such private donations to no more than 50% of the committee’s total funding from all sources.

The bills’ sponsors say that AI’s evolution can greatly benefit society by powering the information economy, fostering better-informed decisions, and helping unlock answers to questions that are presently unanswerable. Their sentiment that fostering the development of AI should be done in a way that maximizes its benefit to society provides a worthy goal for the FUTURE of AI advisory committee’s work. But it also suggests how AI companies may wish to approach AI technology development efforts, especially in the interim period before future legislation becomes law.

How Privacy Law’s Beginnings May Suggest An Approach For Regulating Artificial Intelligence

A survey conducted in April 2017 by Morning Consult suggests most Americans are in favor of regulating artificial intelligence technologies. Of 2,200 American adults surveyed, 71% said they strongly or somewhat agreed that there should be national regulation of AI, while only 14% strongly or somewhat disagreed (15% did not express a view).

Technology and business leaders speaking out on whether to regulate AI fall into one of two camps: those who generally favor an ex post, case-by-case, common law approach, and those who prefer establishing a statutory and regulatory framework that, ex ante, sets forth clear do’s and don’ts and penalties for violations. (If you’re interested in learning about the challenges of ex post and ex ante approaches to regulation, check out Matt Scherer’s excellent article, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” published in the Harvard Journal of Law and Technology (2016)).

Advocates for a proactive regulatory approach caution that the alternative is fraught with predictable danger. Elon Musk, for one, notes that “[b]y the time we’re reactive in A.I., regulation’s too late.” Others, including leaders of some of the biggest AI technology companies in the industry, backed by lobbying organizations like the Information Technology Industry Council (ITI), feel that the hype surrounding AI does not justify quick Congressional action at this time.

Musk criticized this wait-and-see approach. “Normally, the way regulation’s set up,” he said, “a whole bunch of bad things happen, there’s a public outcry, and then after many years, a regulatory agency is set up to regulate that industry. There’s a bunch of opposition from companies who don’t like being told what to do by regulators, and it takes forever. That in the past has been bad but not something which represented a fundamental risk to the existence of civilization.”

Assuming AI regulation is inevitable, how should regulators (and legislators) approach such a formidable task? After all, AI technologies come in many forms, and their uses extend across multiple industries, including some already burdened with regulation. The history of privacy law may provide the answer.

Without question, privacy concerns, and privacy laws, touch on AI technology use and development. That’s because so much of today’s human-machine interaction involving AI is powered by user-provided or user-mined data. Search histories, images people appear in on social media, purchasing habits, home ownership details, political affiliations, and many other data points are well known to marketers and others whose products and services rely on characterizing potential customers using, for example, machine learning algorithms, convolutional neural networks, and other AI tools. In the field of affective computing, human-robot and human-chatbot interactions are driven by a person’s voice, facial features, heart rate, and other physiological features, which are the percepts that an AI system collects, processes, stores, and uses when deciding what actions to take, such as responding to user queries.

Privacy laws evolved from a period during late nineteenth century America when journalists were unrestrained in publishing sensational pieces for newspapers or magazines, basically the “fake news” of the time. This Yellow Journalism, as it was called, prompted legal scholars to express a view that people had a “right to be let alone,” setting in motion the development of a new body of law involving privacy. The key to regulating AI, as it was in the development of regulations governing privacy, may be the recognition of a specific personal right that is, or is expected to be, infringed by AI systems.

In the case of privacy, attorneys Samuel Warren and Louis Brandeis (later, Justice Brandeis) were the first to articulate a personal privacy right. In “The Right to Privacy,” published in the Harvard Law Review in 1890, Warren and Brandeis observed that “the press is overstepping in every direction the obvious bounds of propriety and of decency. Gossip…has become a trade.” They contended that “for years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons.” They argued that a right of privacy was entitled to recognition because “in every [] case the individual is entitled to decide whether that which is his shall be given to the public.” A violation of a person’s right of privacy, they wrote, should be actionable.

Soon after, courts began recognizing the right of privacy in civil cases. By 1960, in his seminal review article entitled Privacy (48 Cal. L. Rev. 383), William Prosser wrote that, “[i]n one form or another,” the right of privacy “was declared to exist by the overwhelming majority of the American courts.” Uniform standards did not immediately follow, however: some states enacted limited or sweeping state-specific statutes, replacing the common law with statutory provisions and penalties, and federal appeals courts weighed in when conflicts between state laws arose. This slow progression from the initial recognition of a personal privacy right in 1890 to today’s modern statutes and expansive body of common law won’t appeal to those pushing for regulation of AI now.

Even so, the process has to begin somewhere, and it could very well start with an assessment of the personal rights that should be recognized arising from interactions with or the use of AI technologies. Already, personal rights recognized by courts and embodied in statutes apply to AI technologies. But there is one personal right, potentially unique to AI technologies, that has been suggested: the right to know why (or how) an AI technology took a particular action (or made a decision) affecting a person.

Take, for example, an adverse credit decision by a bank that relies on machine learning algorithms to decide whether a customer should be given credit. Should that customer have the right to know why (or how) the system made the creditworthiness decision? Fast Company writer Cliff Kuang explored this proposition in his recent article, “Can A.I. Be Taught to Explain Itself?,” published in the New York Times (November 21, 2017).

If AI could explain itself, the banking customer might want to ask what kind of training data was used and whether it was biased, whether an errant line of Python code was to blame, or whether the AI gave appropriate weight to the customer’s credit history. Given the nature of AI technologies, some of these questions, and even more general ones, may only be answered by opening the AI black box; even then, it may be impossible to pinpoint how the AI technology made its decision. In Europe, “tell me why/how” regulations are expected to become effective in May 2018. As I will discuss in a future post, many practical obstacles face those wishing to build a statutory or regulatory framework around the right of consumers to demand that a business’s AI explain why it made or took a particular adverse action.
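
For simple, linear models at least, one rough form of “tell me why” is a per-feature contribution breakdown. The sketch below assumes scikit-learn and invented credit-style features; it is illustrative only and not a description of how any particular bank’s system works.

```python
# Sketch of a simple "explain this decision" step for a linear credit model.
# Feature names and data are invented; real credit models and explanation
# methods are considerably more involved.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_of_history", "late_payments"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(features)))                        # synthetic applicant data
y = (X[:, 0] - X[:, 3] + rng.normal(size=500) > 0).astype(int)   # synthetic outcomes

model = LogisticRegression().fit(X, y)

applicant = X[0]
contributions = model.coef_[0] * applicant   # per-feature contribution to the score
for name, value in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name:>18}: {value:+.2f}")
print("approval probability:", round(model.predict_proba([applicant])[0, 1], 2))
```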

Regulation of AI will likely happen. In fact, we are already seeing the beginning of direct legislative/regulatory efforts aimed at the autonomous driving industry. Whether interest in expanding those efforts to other AI technologies grows or lags may depend at least in part on whether people believe they have personal rights at stake in AI, and whether those rights are being protected by current laws and regulations.

Artificial Intelligence Won’t Achieve Legal Inventorship Status Anytime Soon

Imagine a deposition in which an inventor is questioned about her conception and reduction to practice of an invention directed to a chemical product worth billions of dollars to her company. Testimony reveals how artificial intelligence software, assessing huge amounts of data, identified the patented compound and the compound’s new uses in helping combat disease. The inventor states that she simply performed tests confirming the compound’s qualities and its utility, which the software had already determined. The attorney taking the deposition moves to invalidate the patent on the basis that the patent does not identify the true inventor. The true inventor, the attorney argues, was the company’s AI software.

Seem farfetched? Maybe not in today’s AI world. AI tools can spot cancer and other problems in diagnostic images, as well as identify patient-specific treatments. AI software can identify workable drug combinations for effectively combating pests. AI can predict biological events emerging in hotspots on the other side of the world, even before they’re reported by local media and officials. And lawyers are becoming more aware of AI through use of machine learning tools to predict the relevance of case law, answer queries about how a judge might respond to a particular set of facts, and assess the strength of contracts, among other tools. So while the above deposition scenario is hypothetical, it seems far from unrealistic.

One thing is for sure, however: an AI program will not be named as an inventor or joint inventor on a patent any time soon. At least not until Congress amends US patent laws to broaden the definition of “inventor” and the Supreme Court clarifies what “conception” of an invention means in a world filled with artificially intelligent technologies.

That’s because US patent laws are intended to protect the natural intellectual output of humans, not the artificial intelligence of algorithms. Indeed, Congress left little wiggle room when it defined “inventor” to mean an “individual,” or in the case of a joint invention, the “individuals” collectively who invent or discover the subject matter of an invention. And the Supreme Court has endorsed a human-centric notion of inventorship. This has led courts overseeing patent disputes to repeatedly remind us that “conception” is the touchstone of inventorship, where conception is defined as the “formation in the mind of the inventor, of a definite and permanent idea of the complete and operative invention, as it is hereafter to be applied in practice.”

But consider this. What if “in the mind of” were struck from the definition of “conception” and inventorship? Under that revised definition, an AI system might indeed be viewed as conceiving an invention.

By way of example, let’s say the same AI software and the researcher from the above deposition scenario were participants behind the partition in a classic Turing Test. Would an interrogator be able to distinguish the AI inventor from the natural intelligence inventor if the test for conception of the chemical compound invention is reduced to examining whether the chemical compound idea was “definite” (not vague), “permanent” (fixed), “complete,” “operative” (it works as conceived), and has a practical application (real world utility)? If you were the interrogator in this Turing Test, would you choose the AI software or the researcher who did the follow-up confirmatory testing?

Those who follow patent law may see the irony of legally recognizing AI software as an “inventor” if it “conceives” an invention, when the very same software would likely face an uphill battle being patented by its developers because of the apparent “abstract” nature of many software algorithms.

In any case, for now the question of whether inventorship and inventions should be assessed based on their natural or artificial origin may merely be an academic one. But that may need to change when artificial intelligence development produces artificial general intelligence (AGI) that is capable of performing the same intellectual tasks that a human can.

Marketing “Artificial Intelligence” Needs Careful Planning to Avoid Trademark Troubles

As the market for all things artificial intelligence continues heating up, companies are looking for ways to align their products, services, and entire brands with “artificial intelligence” designations and phrases common in the surging AI industry, including variants such as “AI,” “deep,” “neural,” and others. Reminiscent of the dot-com era of the early 2000s, when companies rushed to market with “i-” or “e-” prefixes or appended “.com” names, today’s artificial intelligence startups are finding traction with AI-related terms and corresponding “.AI” domains. The proliferation of AI marketing, however, may lead to brand and domain disputes. A carefully planned intellectual property strategy may help avoid potential risks down the road, as the recent case Stella.ai, Inc. v. Stellar A.I., Inc., filed in the U.S. District Court for the Northern District of California, demonstrates.

According to court filings, New York City-based Stella.AI, Inc., provider of a jobs-matching website, claims that its “stella.AI” website domain has been in use since March 2016 and its STELLA trademark since February 2016 (its U.S. federal trademark application was reportedly published for opposition in April 2016 by the US Patent and Trademark Office). Palo Alto-based talent and employment agency Stellar A.I., formerly JobGenie, obtained its “stellar.ai” domain and sought trademark status for STELLAR.AI in January 2017, a move that, Stella.AI claims, was prompted by JobGenie’s learning of Stella.AI, Inc.’s domain. Stella.AI’s complaint alleges unfair competition and false designation of origin due to a confusingly similar mark and domain name, and it seeks monetary damages and the transfer of the stellar.ai domain.

In its answer to the complaint, Stellar A.I. says that it created, used, and marketed its services under the STELLAR.AI mark in good faith without prior knowledge of Stella.AI, Inc.’s mark, and in any case, any infringement of the STELLA mark was unintentional.

Artificial intelligence startups face plenty of challenges getting their businesses up and running. The last thing they want to worry about is unexpected trademark litigation involving their “AI” brand and domain names. Fortunately, some practical steps taken early may help reduce the risk of such problems.

As a start, marketers should consider thoroughly searching for conflicting federal, state, and common law uses of a planned company, product, or service name, and they should also consider evaluating corresponding domains as part of an early branding strategy. Trademark searches often reveal other, potentially confusingly-similar, uses of a trademark. Plenty of search firms offer search services, and they will return a list of trademarks that might present problems. If you want to conduct your own search, a good place to start might be the US Patent and Trademark Office’s TESS database, which can be searched to identify federal trademark registrations and pending trademark applications. Evaluating the search results should be done with the assistance of the company’s intellectual property attorney.

It is also good practice to look beyond obtaining a single top-level domain for a company and its brands. For example, if “xyzco.ai” is in play as a possible company “AI” domain name, also consider “xyzco.com” and other top-level domains to prevent someone else from getting their hands on your name. Moreover, consider obtaining domains embodying possible shortcuts and misspellings that prospective customers might use (e.g., “xzyco.ai” transposes two letters), as sketched below.
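
Generating such transposition variants for a candidate name is straightforward; the short sketch below uses the article’s hypothetical “xyzco.ai” placeholder.

```python
# Generate simple typo variants (adjacent-letter transpositions) of a domain
# name so they can be checked for availability and registered defensively.
def transposition_variants(domain: str) -> list[str]:
    name, _, tld = domain.rpartition(".")
    variants = set()
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        if swapped != name:
            variants.add(f"{swapped}.{tld}")
    return sorted(variants)

print(transposition_variants("xyzco.ai"))
# ['xyczo.ai', 'xyzoc.ai', 'xzyco.ai', 'yxzco.ai']
```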

Marketers would also be wise to exercise caution when using competitors’ marks on their company website, although making legitimate comparisons between competing products remains fair use even when the competing products are identified using their trademarks. In such situations, comparisons should clearly state that the marketer’s product is not affiliated with the competitor’s product, and website links to competitors’ products should be avoided.

While startups often focus limited resources on protecting their technology by filing patent applications (or by implementing a comprehensive trade secret policy), a startup’s intellectual property strategy should also consider trademark issues to avoid having to re-brand down the road, as Stellar A.I. did (their new name and domain are now “Stellares” and “stellares.ai,” respectively).

Do Artificial Intelligence Technologies Need Regulating?

At some point, yes. But when? And how?

Today, AI is largely unregulated by federal and state governments. That may change as technologies incorporating AI continue to expand into communications, education, healthcare, law, law enforcement, manufacturing, transportation, and other industries, and prominent scientists as well as lawmakers continue raising concerns about unchecked AI.

The only Congressional proposals directly aimed at AI technologies so far have been limited to regulating Highly Autonomous Vehicles (HAVs, or self-driving cars). In developing those proposals, the House Energy and Commerce Committee brought stakeholders to the table in June 2017 to offer their input. In other areas of AI development, however, technologies are reportedly being developed without the input of those whose knowledge and experience might provide acceptable and appropriate direction.

Tim Hwang, an early adopter of AI technology in the legal industry, says individual artificial intelligence researchers are “basically writing policy in code” that reflects personal perspectives or biases. Kate Crawford, co-founder of AI Now, speaking with Wired magazine, assessed the problem this way: “Who gets a seat at the table in the design of these systems? At the moment, it’s driven by engineering and computer science experts who are designing systems that touch everything from criminal justice to healthcare to education. But in the same way that we wouldn’t expect a federal judge to optimize a neural network, we shouldn’t be expecting an engineer to understand the workings of the criminal justice system.”

Those concerns frame part of the debate over regulating the AI industry, but timing is another big question. Shivon Zilis, an investor at Bloomberg Beta, cautions that AI is here and will become a very powerful technology, so the public discussion of regulation needs to happen now. Others, like Alphabet chairman Eric Schmidt, consider the government regulation debate premature.

A fundamental challenge for Congress and government regulators is how to regulate AI. As AI technologies advance from the simple to the super-intelligent, a one-size-fits-all regulatory approach could cause more problems than it addresses. At one end of the AI technology spectrum, simple AI systems may need little regulatory oversight. At the other end, super-intelligent autonomous systems may be viewed as having rights, and thus a more focused set of regulations may be appropriate. The Information Technology Industry Council (ITI), a lobbying group, “encourage[s] governments to evaluate existing policy tools and use caution before adopting new laws, regulations, or taxes that may inadvertently or unnecessarily impede the responsible development and use of AI.”

Regulating the AI industry will require careful thought and planning. Government regulations are hard to get right, and they rarely please everyone. Regulate too much and economic activity can be stifled. Regulate too little (or not at all) and the consequences could be worse. Congress and regulators will also need to assess the impacts of AI-specific regulations on an affected industry years and decades down the road, a difficult task when market trends and societal acceptance of AI will likely alter the trajectory of the AI industry in possibly unforeseen ways.

But we may be getting ahead of ourselves. Kate Darling recently noted that stakeholders have not yet agreed on basic definitions for AI. For example, there is not even a universally accepted definition today of what a “robot” is.

Sources:
House Energy and Commerce Committee, Hearings on Self-Driving Cars (June 2017)
Wired: “Why AI Is Still Waiting for Its Ethics Transplant”
TechCrunch
Futurism
Gizmodo

Federal Circuit: AI, IoT, and Robotics in “Danger” Due to Uncertainty Surrounding Patent Abstraction Test

In Purepredictive, Inc. v. H2O.ai, Inc., the U.S. District Court for the Northern District of California (J. Orrick) granted Mountain View-based H2O.ai’s motion to dismiss a patent infringement complaint. In doing so, the court found that the claims of asserted U.S. patent 8,880,446 were invalid on the grounds that they “are directed to the abstract concept of the manipulation of mathematical functions and make use of computers only as tools, rather than provide a specific improvement on a computer-related technology.”

Decisions like this hardly make news these days, given the frequency with which software patents are being invalidated by district courts across the country following the Supreme Court’s 2014 Alice Corp. Pty Ltd. v. CLS Bank decision. Perhaps that is why the U.S. Court of Appeals for the Federal Circuit, the specialized appeals court for patent cases based in Washington, DC, chose a recent case to publicly acknowledge that “great uncertainty yet remains” concerning Alice’s patent-eligibility test, despite the large number of post-Alice cases that have “attempted to provide practical guidance.” Calling the uncertainty “dangerous” for some of today’s “most important inventions in computing” (specifically identifying medical diagnostics, artificial intelligence (AI), the Internet of Things (IoT), and robotics), the Federal Circuit expressed concern that perhaps Alice has gone too far, a belief shared by others, especially smaller technology companies whose value is tied to their software intellectual property.

Utah-based Purepredictive says its ’446 patent involves “AI driving machine learning ensembling.” The district court characterized the patent as being directed to a software method that performs “predictive analytics” in three steps. In the method’s first step, the court said, it receives data and generates “learned functions,” for example regressions, from that data. Second, it evaluates the effectiveness of those learned functions at making accurate predictions based on test data. Finally, it selects the most effective learned functions and creates a rule set for additional data input. This method, the district court found, is merely “directed to a mental process” performed by a computer and to “the abstract concept of using mathematical algorithms to perform predictive analytics” by collecting and analyzing information.
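
Read simply as a description of a technique (and not as a rendering of the actual patent claims), the court’s three steps might be sketched as follows, with synthetic data and a trivial “keep the best scorer” rule standing in for the claimed rule set.

```python
# Sketch of the three steps as the court described them: (1) generate several
# "learned functions" from data, (2) evaluate how well each predicts held-out
# test data, (3) select the most effective one for use on additional data.
# Synthetic data and a trivial selection "rule" are used for illustration only.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: generate candidate learned functions (e.g., regressions) from the data.
candidates = [LinearRegression(), Ridge(alpha=1.0), DecisionTreeRegressor(max_depth=4)]
for model in candidates:
    model.fit(X_train, y_train)

# Step 2: evaluate each learned function's predictive effectiveness on test data.
scores = {model.__class__.__name__: model.score(X_test, y_test) for model in candidates}

# Step 3: select the most effective learned function for additional data input.
best_name = max(scores, key=scores.get)
best_model = next(m for m in candidates if m.__class__.__name__ == best_name)
print(scores, "-> selected:", best_name)
print(best_model.predict(X_test[:3]))
```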

Alice critics have long pointed to the subjective nature of Alice’s patent-eligibility test. Under Alice, for subject matter of a patent claim to be patent eligible under 35 U.S.C. § 101, it may not be “directed to” a patent-ineligible concept, i.e., a law of nature, natural phenomenon, or abstract idea. If it is, however, it may nevertheless be patentable subject matter if the particular elements of the claim, considered both individually and as an ordered combination, add enough to transform the nature of the claim into a patent-eligible application. This two-part test has led to the invalidation of many software patents as “abstract,” and presents an obstacle for inventors of new software tools seeking patent protection for their inventions.

In the Purepredictive case, the district court found that the claimed methods “are mathematical processes that not only could be performed by humans but also go to the general abstract concept of predictive analytics rather than any specific application.” The “could be performed by humans” inquiry would seem problematic for many software-based patent claims, including those directed to AI algorithms, despite the recognition that humans could never, in a lifetime, perform the same feat as many AI algorithms, given the enormous domain space these algorithms are tasked with evaluating.

In any event, while Alice’s abstract-idea test will continue to pose challenges to those seeking patents, time will tell whether it will have the “dangerous” impacts on the burgeoning AI, IoT, and robotics industries that the Federal Circuit suggested.

Sources:

Purepredictive, Inc. v. H2O.AI, Inc., slip op., No. 17-cv-03049-WHO (N.D. Cal. Aug. 29, 2017).

Smart Systems Innovations, LLC v. Chicago Transit Authority, slip op., No. 2016-1233 (Fed. Cir. Oct. 18, 2017) (citing Alice Corp. Pty Ltd. v. CLS Bank, 134 S. Ct. 2347, 2354-55 (2014)).