Congress Takes Aim at the FUTURE of Artificial Intelligence

As the calendar turns over to 2018, artificial intelligence system developers will need to keep an eye on first-of-its-kind legislation being considered in Congress. The “Fundamentally Understanding The Usability and Realistic Evolution of Artificial Intelligence Act of 2017,” or FUTURE of AI Act, is Congress’s first major step toward comprehensive regulation of the AI tech sector.

Introduced on December 22, 2017, companion bills S.2217 and H.R.4625 touch on a host of AI issues, their stated purposes mirroring concerns raised by many about possible problems facing society as AI technologies become ubiquitous. The bills propose to establish a federal advisory committee charged with reporting to the Secretary of Commerce on many of today’s hot button, industry-disrupting AI issues.

Definitions

Leaving the definition of “artificial intelligence” open for later modification, both bills take a broad brush to defining, inclusively, what an AI system is, what artificial general intelligence (AGI) means, and what “narrow” AI systems are, each of which presumably would be treated differently under future laws and implementing regulations.

Under both measures, AI is generally defined as “artificial systems that perform tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance,” and encompasses systems that “solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.” According to the bills’ sponsors, the more “human-like the system within the context of its tasks, the more it can be said to use artificial intelligence.”

While those definitions and descriptions include plenty of ambiguity, characteristic of early legislative efforts, the bills also provide several clarifying examples: AI involves technologies that think like humans, such as cognitive architectures and neural networks; those that act like humans, such as systems that can pass the Turing test or other comparable test via natural language processing, knowledge representation, automated reasoning, and learning; those using sets of techniques, including machine learning, that seek to approximate some cognitive task; and AI technologies that act rationally, such as intelligent software agents and embodied robots that achieve goals via perception, planning, reasoning, learning, communicating, decision making, and acting.

The bills describe AGI as “a notional future AI system exhibiting apparently intelligent behavior at least as advanced as a person across the range of cognitive, emotional, and social behaviors,” which is generally consistent with how many others view the concept of an AGI system.

So-called narrow AI is viewed as an AI system that addresses specific application areas such as playing strategic games, language translation, self-driving vehicles, and image recognition. Plenty of other AI technologies today employ what the sponsors define as narrow AI.

The FUTURE of AI Committee

Both the House and Senate versions would establish a FUTURE of AI advisory committee made up of government and private-sector members tasked with evaluating and reporting on AI issues.

The bills emphasize that the committee should consider accountability and legal rights issues, including identifying where responsibility lies for violations of laws by an AI system, and assessing the compatibility of international regulations involving privacy rights of individuals who are or will be affected by technological innovation relating to AI. The committee will evaluate whether advancements in AI technologies have or will outpace the legal and regulatory regimes implemented to protect consumers, and how existing laws, including those concerning data access and privacy (as discussed here), should be modernized to enable the potential of AI.

The committee will study workforce impacts, including whether and how networked, automated, AI applications and robotic devices will displace or create jobs and how any job-related gains from AI can be maximized. The committee will also evaluate the role ethical issues should take in AI development, including whether and how to incorporate ethical standards in the development and implementation of AI, as suggested by groups such as IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems.

The committee will consider issues of machine learning bias through core cultural and societal norms, including how bias can be identified and eliminated in the development of AI and in the algorithms that support AI technologies. The committee will focus on evaluating the selection and processing of data used to train AI, diversity in the development of AI, the ways and places the systems are deployed and the potential harmful outcomes, and how ongoing dialogues and consultations with multi-stakeholder groups can maximize the potential of AI and further development of AI technologies that can benefit everyone inclusively.

The FUTURE of AI committee will also consider issues of competitiveness of the United States, such as how to create a climate for public and private sector investment and innovation in AI, and the possible benefits and effects that the development of AI may have on the economy, workforce, and competitiveness of the United States. The committee will be charged with reviewing AI-related education; open sharing of data and the open sharing of research on AI; international cooperation and competitiveness; opportunities for AI in rural communities (that is, how the Federal Government can encourage technological progress in implementation of AI that benefits the full spectrum of social and economic classes); and government efficiency (that is, how the Federal Government utilizes AI to handle large or complex data sets, how the development of AI can affect cost savings and streamline operations in various areas of government operations, including health care, cybersecurity, infrastructure, and disaster recovery).

Non-profits like AI Now and the Future of Life Institute, among others, are also considering many of the same issues. And while those groups primarily rely on private funding, the FUTURE of AI advisory committee will be funded through Congressional appropriations or through contributions “otherwise made available to the Secretary of Commerce,” which may include donations from private persons and non-federal entities that have a stake in AI technology development. The bills limit private donations to no more than 50% of the committee’s total funding from all sources.

The bills’ sponsors say that AI’s evolution can greatly benefit society by powering the information economy, fostering better-informed decisions, and helping unlock answers to questions that are presently unanswerable. Their sentiment that fostering the development of AI should be done in a way that maximizes AI’s benefit to society provides a worthy goal for the FUTURE of AI advisory committee’s work. But it also suggests how AI companies may wish to approach AI technology development efforts, especially in the interim period before future legislation becomes law.

How Privacy Law’s Beginnings May Suggest An Approach For Regulating Artificial Intelligence

A survey conducted in April 2017 by Morning Consult suggests most Americans are in favor of regulating artificial intelligence technologies. Of 2,200 American adults surveyed, 71% said they strongly or somewhat agreed that there should be national regulation of AI, while only 14% strongly or somewhat disagreed (15% did not express a view).

Technology and business leaders speaking out on whether to regulate AI fall into one of two camps: those who generally favor an ex post, case-by-case, common law approach, and those who prefer establishing a statutory and regulatory framework that, ex ante, sets forth clear do’s and don’ts and penalties for violations. (If you’re interested in learning about the challenges of ex post and ex ante approaches to regulation, check out Matt Scherer’s excellent article, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” published in the Harvard Journal of Law and Technology (2016)).

Advocates for a proactive regulatory approach caution that the alternative is fraught with predictable danger. Elon Musk for one, notes that, “[b]y the time we’re reactive in A.I., regulation’s too late.” Others, including leaders of some of the biggest AI technology companies in the industry, backed by lobbying organizations like the Information Technology Industry Council (ITI), feel that the hype surrounding AI does not justify quick Congressional action at this time.

Musk criticized this wait-and-see approach. “Normally, the way regulation’s set up,” he said, “a whole bunch of bad things happen, there’s a public outcry, and then after many years, a regulatory agency is set up to regulate that industry. There’s a bunch of opposition from companies who don’t like being told what to do by regulators, and it takes forever. That in the past has been bad but not something which represented a fundamental risk to the existence of civilization.”

Assuming AI regulation is inevitable, how should regulators (and legislators) approach such a formidable task? After all, AI technologies come in many forms, and their uses extend across multiple industries, including some already burdened with regulation. The history of privacy law may provide the answer.

Without question, privacy concerns, and privacy laws, touch on AI technology use and development. That’s because so many of today’s human-machine interactions involving AI are powered by user-provided or user-mined data. Search histories, images people appear in on social media, purchasing habits, home ownership details, political affiliations, and many other data points are well-known to marketers and others whose products and services rely on characterizing potential customers using, for example, machine learning algorithms, convolutional neural networks, and other AI tools. In the field of affective computing, human-robot and human-chatbot interactions are driven by a person’s voice, facial features, heart rate, and other physiological features, which are the percepts that the AI system collects, processes, stores, and uses when deciding actions to take, such as responding to user queries.

Privacy laws evolved from a period during late nineteenth century America when journalists were unrestrained in publishing sensational pieces for newspapers or magazines, basically the “fake news” of the time. This Yellow Journalism, as it was called, prompted legal scholars to express a view that people had a “right to be let alone,” setting in motion the development of a new body of law involving privacy. The key to regulating AI, as it was in the development of regulations governing privacy, may be the recognition of a specific personal right that is, or is expected to be, infringed by AI systems.

In the case of privacy, attorneys Samuel Warren and Louis Brandeis (later, Justice Brandeis) were the first to articulate a personal privacy right. In The Right to Privacy, published in the Harvard Law Review in 1890, Warren and Brandeis observed that “the press is overstepping in every direction the obvious bounds of propriety and of decency. Gossip…has become a trade.” They contended that “for years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons.” They argued that a right of privacy was entitled to recognition because “in every [] case the individual is entitled to decide whether that which is his shall be given to the public.” A violation of the person’s right of privacy, they wrote, should be actionable.

Soon after, courts began recognizing the right of privacy in civil cases. By 1960, in his seminal review article entitled Privacy (48 Cal. L. Rev. 383), William Prosser wrote, “In one form or another,” the right of privacy “was declared to exist by the overwhelming majority of the American courts.” That recognition gradually led to more uniform standards. Some states enacted limited or sweeping state-specific statutes, replacing the common law with statutory provisions and penalties. Federal appeals courts weighed in when conflicts between state laws arose. This slow progression from initial recognition of a personal privacy right in 1890 to today’s modern statutes and expansive body of common law won’t appeal to those pushing for regulation of AI now.

Even so, the process has to begin somewhere, and it could very well start with an assessment of the personal rights that should be recognized arising from interactions with or the use of AI technologies. Already, personal rights recognized by courts and embodied in statutes apply to AI technologies. But there is one personal right, potentially unique to AI technologies, that has been suggested: the right to know why (or how) an AI technology took a particular action (or made a decision) affecting a person.

Take, for example, an adverse credit decision by a bank that relies on machine learning algorithms to decide whether a customer should be given credit. Should that customer have the right to know why (or how) the system made the credit-worthiness decision? FastCompany writer Cliff Kuang explored this proposition in his recent article, “Can A.I. Be Taught to Explain Itself?” published in the New York Times (November 21, 2017).

If AI could explain itself, the banking customer might want to ask it what kind of training data was used and whether the data was biased, or whether there was an errant line of Python code to blame, or whether the AI gave the appropriate weight to the customer’s credit history. Given the nature of AI technologies, some of these questions, and even more general ones, may only be answered by opening the AI black box. But even then it may be impossible to pinpoint how the AI technology made its decision. In Europe, “tell me why/how” regulations are expected to become effective in May 2018. As I will discuss in a future post, many practical obstacles face those wishing to build a statute or regulatory framework around the right of consumers to demand that a business’s AI explain why it made or took a particular adverse action.
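To make the “tell me why/how” question concrete, below is a minimal, hypothetical sketch of the kind of explanation a lender’s engineers could surface from a simple scoring model. The feature names, toy data, and logistic-regression model are illustrative assumptions only; they are not drawn from the bills, the European rules, or any real credit system, and real underwriting models are typically far more complex and far harder to explain.

```python
# Hypothetical sketch: explaining an adverse credit decision from a simple,
# interpretable model. All names and numbers here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["credit_history_years", "income_thousands", "debt_ratio"]

# Toy training data: each row is a past applicant; label 1 means credit was granted.
X = np.array([
    [10, 85, 0.20],
    [ 2, 40, 0.65],
    [ 7, 60, 0.30],
    [ 1, 30, 0.70],
    [12, 95, 0.15],
    [ 3, 45, 0.60],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

applicant = np.array([[2, 50, 0.55]])
decision = model.predict(applicant)[0]
print("Decision:", "approve" if decision == 1 else "deny")

# For a linear model, each feature's contribution to the decision score is its
# value times its learned weight, which is one narrow answer to "why this decision?"
for name, weight, value in zip(feature_names, model.coef_[0], applicant[0]):
    print(f"{name}: weight={weight:+.3f}, value={value}, contribution={weight * value:+.3f}")
```

Even for a transparent model like this one, the printout answers only which inputs pushed the score in which direction; it says nothing about whether the training data was biased or whether the model should have been built that way at all, which is where the harder legal and practical questions lie.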

Regulation of AI will likely happen. In fact, we are already seeing the beginning of direct legislative/regulatory efforts aimed at the autonomous driving industry. Whether interest in expanding those efforts to other AI technologies grows or lags may depend at least in part on whether people believe they have personal rights at stake in AI, and whether those rights are being protected by current laws and regulations.

Artificial Intelligence Won’t Achieve Legal Inventorship Status Anytime Soon

Imagine a deposition in which an inventor is questioned about her conception and reduction to practice of an invention directed to a chemical product worth billions of dollars to her company. Testimony reveals how artificial intelligence software, assessing huge amounts of data, identified the patented compound and the compound’s new uses in helping combat disease. The inventor states that she simply performed tests confirming the compound’s qualities and its utility, which the software had already determined. The attorney taking the deposition moves to invalidate the patent on the basis that the patent does not identify the true inventor. The true inventor, the attorney argues, was the company’s AI software.

Seem farfetched? Maybe not in today’s AI world. AI tools can spot cancer and other problems in diagnostic images, as well as identify patient-specific treatments. AI software can identify workable drug combinations for effectively combating pests. AI can predict biological events emerging in hotspots on the other side of the world, even before they’re reported by local media and officials. And lawyers are becoming more aware of AI through use of machine learning tools to predict the relevance of case law, answer queries about how a judge might respond to a particular set of facts, and assess the strength of contracts, among other tools. So while the above deposition scenario is hypothetical, it seems far from unrealistic.

One thing is for sure, however: an AI program will not be named as an inventor or joint inventor on a patent any time soon. At least not until Congress amends US patent laws to broaden the definition of “inventor” and the Supreme Court clarifies what “conception” of an invention means in a world filled with artificially intelligent technologies.

That’s because US patent laws are intended to protect the natural intellectual output of humans, not the artificial intelligence of algorithms. Indeed, Congress left little wiggle room when it defined “inventor” to mean an “individual,” or in the case of a joint invention, the “individuals” collectively who invent or discover the subject matter of an invention. And the Supreme Court has endorsed a human-centric notion of inventorship. This has led courts overseeing patent disputes to repeatedly remind us that “conception” is the touchstone of inventorship, where conception is defined as the “formation in the mind of the inventor, of a definite and permanent idea of the complete and operative invention, as it is hereafter to be applied in practice.”

But consider this. What if “in the mind of” were struck from the definition of “conception” and inventorship? Under that revised definition, an AI system might indeed be viewed as conceiving an invention.

By way of example, let’s say the same AI software and the researcher from the above deposition scenario were participants behind the partition in a classic Turing Test. Would an interrogator be able to distinguish the AI inventor from the natural intelligence inventor if the test for conception of the chemical compound invention is reduced to examining whether the chemical compound idea was “definite” (not vague), “permanent” (fixed), “complete,” “operative” (it works as conceived), and has a practical application (real world utility)? If you were the interrogator in this Turing Test, would you choose the AI software or the researcher who did the follow-up confirmatory testing?

Those who follow patent law may see the irony of legally recognizing AI software as an “inventor” if it “conceives” an invention, when the very same software would likely face an uphill battle being patented by its developers because of the apparent “abstract” nature of many software algorithms.

In any case, for now the question of whether inventorship and inventions should be assessed based on their natural or artificial origin may merely be an academic one. But that may need to change when artificial intelligence development produces artificial general intelligence (AGI) that is capable of performing the same intellectual tasks that a human can.

Marketing “Artificial Intelligence” Needs Careful Planning to Avoid Trademark Troubles

As the market for all things artificial intelligence continues heating up, companies are looking for ways to align their products, services, and entire brands with “artificial intelligence” designations and phrases common in the surging artificial intelligence industry, including variants such as “AI,” “deep,” “neural,” and others. Reminiscent of the dot-com era of the early 2000s, when companies rushed to market with “i-” or “e-” prefixes or appended “.com” names, today’s artificial intelligence startups are finding traction with artificial intelligence-related terms and corresponding “.AI” domains. The proliferation of AI marketing, however, may lead to brand and domain disputes. But a carefully planned intellectual property strategy may help avoid potential risks down the road, as the recent case Stella.AI, Inc. v. Stellar A.I., Inc., filed in the U.S. District Court for the Northern District of California, demonstrates.

According to court filings, New York City-based Stella.AI, Inc., provider of a jobs matching website, claims that its “stella.AI” website domain has been in use since March 2016, and its STELLA trademark since February 2016 (its U.S. federal trademark application was reportedly published for opposition in April 2016 by the US Patent and Trademark Office). Palo Alto-based talent and employment agency Stellar A.I., formerly JobGenie, obtained its “stellar.ai” domain and sought trademark status for STELLAR.AI in January 2017, a move that, Stella.AI claims, was prompted after JobGenie learned of Stella.AI, Inc.’s domain. Stella.AI’s complaint alleges unfair competition and false designation of origin due to a confusingly similar mark and domain name. It sought monetary damages and the transfer of the stellar.ai domain.

In its answer to the complaint, Stellar A.I. says that it created, used, and marketed its services under the STELLAR.AI mark in good faith without prior knowledge of Stella.AI, Inc.’s mark, and in any case, any infringement of the STELLA mark was unintentional.

Artificial intelligence startups face plenty of challenges getting their businesses up and going. The last thing they want to worry about is unexpected trademark litigation involving their “AI” brand and domain names. Fortunately, some practical steps taken early may help reduce the risk of such problems.

As a start, marketers should consider thoroughly searching for conflicting federal, state, and common law uses of a planned company, product, or service name, and they should also consider evaluating corresponding domains as part of an early branding strategy. Trademark searches often reveal other, potentially confusingly-similar, uses of a trademark. Plenty of search firms offer search services, and they will return a list of trademarks that might present problems. If you want to conduct your own search, a good place to start might be the US Patent and Trademark Office’s TESS database, which can be searched to identify federal trademark registrations and pending trademark applications. Evaluating the search results should be done with the assistance of the company’s intellectual property attorney.

It is also good practice to look beyond obtaining a single top-level domain for a company and its brands. For example, if “xyzco.ai” is in play as a possible company “AI” domain name, also consider “xyzco.com” and other top-level domains to prevent someone else from getting their hands on your name. Moreover, consider obtaining domains embodying possible shortcuts and misspellings that prospective customers might use (e.g., “xzyco.ai,” which transposes two letters).
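As a rough illustration of the transposition and misspelling point, here is a short, hypothetical Python helper that generates adjacent-letter swaps and single-character deletions of a base name. It is only a brainstorming aid under those assumptions; it does not check domain availability and is no substitute for a proper trademark clearance search.

```python
# Hypothetical helper: brainstorm typo-style domain variants worth considering
# for defensive registration. Purely illustrative; no availability checking.
def domain_variants(name: str, tld: str = "ai") -> set[str]:
    variants = set()
    # Adjacent-letter transpositions, e.g. "xyzco" -> "xzyco"
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        variants.add(f"{swapped}.{tld}")
    # Single-character deletions, e.g. "xyzco" -> "xyco"
    for i in range(len(name)):
        variants.add(f"{name[:i] + name[i + 1:]}.{tld}")
    variants.discard(f"{name}.{tld}")  # drop the original if a swap reproduced it
    return variants

print(sorted(domain_variants("xyzco")))
```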

Marketers would also be wise to exercise caution when using competitors’ marks on their company website, although making legitimate comparisons between competing products remains fair use even when the competing products are identified using their trademarks. In such situations, comparisons should clearly state that the marketer’s product is not affiliated with the competitor’s product, and website links to competitors’ products should be avoided.

While startups often focus limited resources on protecting their technology by filing patent applications (or by implementing a comprehensive trade secret policy), a startup’s intellectual property strategy should also consider trademark issues to avoid having to re-brand down the road, as Stellar A.I. did (their new name and domain are now “Stellares” and “stellares.ai,” respectively).

Do Artificial Intelligence Technologies Need Regulating?

At some point, yes. But when? And how?

Today, AI is largely unregulated by federal and state governments. That may change as technologies incorporating AI continue to expand into communications, education, healthcare, law, law enforcement, manufacturing, transportation, and other industries, and prominent scientists as well as lawmakers continue raising concerns about unchecked AI.

The only Congressional proposals directly aimed at AI technologies so far have been limited to regulating Highly Autonomous Vehicles (HAVs, or self-driving cars). In developing those proposals, the House Energy and Commerce Committee brought stakeholders to the table in June 2017 to offer their input. In other areas of AI development, however, technologies are reportedly being developed without the input of those whose knowledge and experience might provide acceptable and appropriate direction.

Tim Hwang, an early adopter of AI technology in the legal industry, says individual artificial intelligence researchers are “basically writing policy in code” that reflects personal perspectives or biases. Kate Crawford, co-founder of the AI Now Institute, speaking with Wired magazine, assessed the problem this way: “Who gets a seat at the table in the design of these systems? At the moment, it’s driven by engineering and computer science experts who are designing systems that touch everything from criminal justice to healthcare to education. But in the same way that we wouldn’t expect a federal judge to optimize a neural network, we shouldn’t be expecting an engineer to understand the workings of the criminal justice system.”

Those concerns frame part of the debate over regulating the AI industry, but timing is another big question. Shivon Zilis, an investor at Bloomberg Beta, cautions that AI is already here and will become a very powerful technology, so the public discussion of regulation needs to happen now. Others, like Alphabet chairman Eric Schmidt, consider the government regulation debate premature.

A fundamental challenge for Congress and government regulators is how to regulate AI. As AI technologies advance from the simple to the super-intelligent, a one-size-fits-all regulatory approach could cause more problems than it addresses. On one end of the AI technology spectrum, simple AI systems may need little regulatory oversight. On the other end, super-intelligent autonomous systems may be viewed as having rights, and thus a focused set of regulations may be more appropriate. The Information Technology Industry Council (ITI), a lobbying group, “encourage[s] governments to evaluate existing policy tools and use caution before adopting new laws, regulations, or taxes that may inadvertently or unnecessarily impede the responsible development and use of AI.”

Regulating the AI industry will require careful thought and planning. Government regulations are hard to get right, and they rarely please everyone. Regulate too much and economic activity can be stifled. Regulate too little (or not at all) and the consequences could be worse. Congress and regulators will also need to assess the impacts of AI-specific regulations on an affected industry years and decades down the road, a difficult task when market trends and societal acceptance of AI will likely alter the trajectory of the AI industry in possibly unforeseen ways.

But we may be getting ahead of ourselves. Kate Darling recently noted that stakeholders have not yet agreed on basic definitions for AI. For example, there is not even a universally accepted definition today of what a “robot” is.

Sources:
June 2017 House Energy and Commerce Committee, Hearings on Self-Driving Cars

Wired Magazine: Why AI is Still Waiting for its Ethics Transplant

TechCrunch

Futurism

Gizmodo

Federal Circuit: AI, IoT, and Robotics in “Danger” Due to Uncertainty Surrounding Patent Abstraction Test

In Purepredictive, Inc. v. H2O.ai, Inc., the U.S. District Court for the Northern District of California (J. Orrick) granted Mountain View-based H2O.ai’s motion to dismiss a patent infringement complaint. In doing so, the court found that the claims of asserted U.S. patent 8,880,446 were invalid on the grounds that they “are directed to the abstract concept of the manipulation of mathematical functions and make use of computers only as tools, rather than provide a specific improvement on a computer-related technology.”

Decisions like this hardly make news these days, given the frequency with which software patents are being invalidated by district courts across the country following the Supreme Court’s 2014 Alice Corp. Pty Ltd. v. CLS Bank decision. Perhaps that is why the U.S. Court of Appeals for the Federal Circuit, the specialized appeals court for patent cases based in Washington, DC, chose a recent case to publicly acknowledge that “great uncertainty yet remains” concerning Alice’s patent-eligibility test, despite the large number of post-Alice cases that have “attempted to provide practical guidance.” Calling the uncertainty “dangerous” for some of today’s “most important inventions in computing” (specifically identifying medical diagnostics, artificial intelligence (AI), the Internet of Things (IoT), and robotics), the Federal Circuit expressed concern that perhaps Alice has gone too far, a belief shared by others, especially smaller technology companies whose value is tied to their software intellectual property.

Utah-based Purepredictive says its ‘446 patent involves “AI driving machine learning ensembling.” The district court characterized the patent as being directed to a software method that performs “predictive analytics” in three steps. In the method’s first step, the court said, it receives data and generates “learned functions,” or, for example, regressions from that data. Second, it evaluates the effectiveness of those learned functions at making accurate predictions based on the test data. Finally, it selects the most effective learned functions and creates a rule set for additional data input. This method, the district court found, is merely “directed to a mental process” performed by a computer, and “the abstract concept of using mathematical algorithms to perform predictive analytics” by collecting and analyzing information.
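For readers less familiar with the terminology, the short sketch below restates the court’s three-step characterization in runnable Python using off-the-shelf regressors. It is an illustrative assumption about what such a “predictive analytics” pipeline could look like in general, not a reconstruction of the actual ’446 patent claims or of either party’s software.

```python
# Illustrative sketch of the court's three-step description of the claimed method:
# (1) generate several "learned functions" from data, (2) evaluate their accuracy
# on held-out test data, and (3) select the most effective one as the rule applied
# to additional data. Synthetic data; not the patented system.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.5, size=200)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: generate candidate "learned functions" (here, three regressors).
candidates = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "tree": DecisionTreeRegressor(max_depth=4, random_state=0),
}
for model in candidates.values():
    model.fit(X_train, y_train)

# Step 2: evaluate each learned function's predictions against the test data.
errors = {name: mean_squared_error(y_test, m.predict(X_test))
          for name, m in candidates.items()}

# Step 3: select the most effective learned function for use on new inputs.
best = min(errors, key=errors.get)
print("selected learned function:", best, "| test MSE:", round(errors[best], 3))
```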

Alice critics have long pointed to the subjective nature of Alice’s patent-eligibility test. Under Alice, for subject matter of a patent claim to be patent eligible under 35 U.S.C. § 101, it may not be “directed to” a patent-ineligible concept, i.e., a law of nature, natural phenomenon, or abstract idea. If it is, however, it may nevertheless be patentable subject matter if the particular elements of the claim, considered both individually and as an ordered combination, add enough to transform the nature of the claim into a patent-eligible application. This two-part test has led to the invalidation of many software patents as “abstract,” and presents an obstacle for inventors of new software tools seeking patent protection for their inventions.

In the Purepredictive case, the district court found that the claimed methods “are mathematical processes that not only could be performed by humans but also go to the general abstract concept of predictive analytics rather than any specific application.” The “could be performed by humans” query would seem problematic for many software-based patent claims, including those directed to AI algorithms, despite the recognition that humans could never perform the same feat as many AI algorithms in a lifetime due to the enormous domain space these algorithms are tasked with evaluating.

In any event, while Alice’s abstract-idea test will continue to pose challenges to those seeking patents, time will tell whether it will have the “dangerous” impacts on the burgeoning AI, IoT, and robotics industries that the Federal Circuit suggested.

Sources:

Purepredictive, Inc. v. H2O.AI, Inc., slip op., No. 17-cv-03049-WHO (N.D. Cal. Aug. 29, 2017).

Smart Systems Innovations, LLC v. Chicago Transit Authority, slip op., No. 2016-1233 (Fed. Cir. Oct. 18, 2017) (citing Alice Corp. Pty Ltd. v. CLS Bank, 134 S. Ct. 2347, 2354-55 (2014)).

Inaugural Post – AI Tech and the Law

Welcome. I am excited to present the first of what I hope will be many useful and timely posts covering issues arising at the crossroads of artificial intelligence technology and the law. My goal with this blog is to provide insightful discussion concerning the legal issues expected to affect individuals and businesses as they develop and interact with AI products and services. I also hope to engage with AI thought leaders in the legal industry as new AI technology-specific issues emerge. Join me by sharing your thoughts about AI and the law. If you’d like to see a particular issue discussed on these pages, I invite you to send me an email.

Much has already been written about the promises of AI and its ever-increasing role in daily life. AI technologies are unquestionably making their presence known in many impactful ways. Three billion smartphones are in use worldwide, and many of them use one form of AI or another. Voice assistants driven by AI are appearing on kitchen countertops everywhere. Online search engines, powered by AI, deliver your search results. Selecting like/love/dislike/thumbs-down on your music streaming or news aggregation apps empowers AI algorithms to make recommendations for you.

Today’s tremendous AI industry expansion, driven by big data and enhanced computational power, will continue at an unprecedented rate in the future. We are seeing investors fund AI-focused startups across the globe. As Mark Cuban predicted earlier this year, the world’s first trillionaire will be an AI entrepreneur.

Not everyone, however, shares the same positive outlook concerning AI. Elon Musk, Bill Gates, Stephen Hawking and others have raised concerns. Many foresee problems arising as AI becomes ubiquitous, especially if businesses are left to develop AI systems without guidance. The media have written about employees displaced by autonomous systems; bias, social justice, and civil rights concerns in big data; AI consumer product liability; privacy and data security; superintelligent systems; and other issues. Some have even predicted dire consequences from unchecked AI.

But with all the talk about AI, both positive and negative, businesses are operating in a vacuum of laws, regulations, and court opinions dealing directly with AI. Indeed, with only a few exceptions, most businesses today have little in the way of legal guidance about acceptable practices when it comes to developing and deploying their AI systems. While some advocate for a common law approach to dealing with AI problems on a case-by-case basis, others would like to see a more structured regulatory framework.

I look forward to considering these and other issues in the months to come.

Brian Higgins