A Proposed AI Task Force to Confront Talent Shortage and Workforce Changes

Just over a month after the House and Senate commerce committees received companion bills recommending a federal task force to broadly examine the “FUTURE” of Artificial Intelligence in the United States (H.R. 4625; introduced Dec. 12, 2017), the House Committee on Education and the Workforce is set to consider a bill calling for a task force assessment of the impacts of AI technologies on the US workforce.

If enacted, the “Artificial Intelligence Job Opportunities and Background Summary Act of 2018,” or the “AI JOBS Act of 2018” (H.R. 4829; introduced Jan. 18, 2018), would require the Secretary of Labor to report on the impact and growth of AI; the industries and workers most likely to be affected; the expertise and education an AI economy will demand compared with today’s; the workers who stand to gain expanded career opportunities from AI and those most vulnerable to displacement; and ways to alleviate workforce displacement and prepare a future AI workforce.

Assessing these issues now is critical. Former Senator Tom Daschle and David Beier, in a recent opinion piece published in The Hill, see a “dramatic set of changes” in the nature of work in America as AI technologies become more entrenched in the US economy. Citing a McKinsey Global Institute study of 800 occupations, Daschle and Beier conclude that AI technologies will not cause net job losses. Rather, job losses will likely be offset by job changes and gains in fields such as healthcare, infrastructure development, and energy, as well as in fields that do not exist today. They cite Gartner Research estimates suggesting millions of new jobs will be created directly or indirectly as a result of the AI economy.

Already there are more AI-related jobs than high-skilled workers to fill them. One popular professional networking site currently lists over 6,000 “artificial intelligence” jobs. Chinese internet giant Tencent estimates there are only 300,000 AI experts worldwide (a recent estimate by Toronto-based Element AI puts that figure at merely 90,000). In testimony this week before a House Information Technology subcommittee, Amir Khosrowshahi, CTO of Intel’s artificial intelligence products group, said that “[w]orkers need to have the right skills to create AI technologies and right now we have too few workers to do the job.” Huge salaries for newly-minted computer science PhDs will draw more people to the field, but job openings are likely to outpace available talent even as record numbers of students enroll in machine learning and related AI classes at top US universities.

If AI job gains shift workers disproportionately toward high-skilled jobs, the result may be continued inequality of job opportunity. A 2016 study by Georgetown University’s Center on Education and the Workforce found that “out of the 11.6 million jobs created in the post-recession economy, 11.5 million went to workers with at least some college education.” The study’s authors found that, since 2008, workers with graduate degrees saw the largest job gains (83%), predominantly in high-skill occupations, and college graduates saw the next highest gains (57%), also in high-skill jobs. The highest job growth was in management, healthcare, and computer and mathematical sciences. These same fields are primed for a future influx of highly-skilled AI workers.

The US is not alone in raising concerns about job and workforce changes in an AI economy. The UK Parliament’s Artificial Intelligence Committee, for example, is confronting the challenge of re-educating the UK’s workforce to build the skills needed to work alongside AI systems. The US may need to do more to catch up, according to Mr. Khosrowshahi. “Current federal funding levels [in tech education],” he argued, “are not keeping pace with the rest of the industrialized world.”

The AI JOBS Act of 2018 presents an opportunity for US policymakers to develop novel approaches to the workforce shifts an AI economy is expected to cause. If nothing is done, the US could find itself at a competitive disadvantage and facing increased economic inequality.

New York City Task Force to Consider Algorithmic Harm

One might hear discussions about backpropagation, activation functions, and gradient descent when visiting an artificial intelligence company. But more recently, terms like bias and harm associated with AI models and products have entered tech’s vernacular. These issues also have the attention of many outside of the tech world following reports of AI systems performing better for some users than for others when making life-altering decisions about prison sentences, creditworthiness, and job hiring, among others.

Judging by the number of recently accepted conference papers on algorithmic bias, AI technologists, ethicists, and lawyers seem to be proactively addressing the issue by sharing technical and other solutions with one another. At the same time, at least one legislative body–the New York City Council–has decided to explore ways to regulate AI technology, with an unstated goal of rooting out bias (or at least revealing its presence) by making AI systems more transparent.

New York City’s “Automated decision systems used by agencies” law (NYC Local Law No. 49 of 2018, effective January 11, 2018) creates a task force under the aegis of Mayor de Blasio’s office. The task force must convene no later than early May 2018 to identify automated decision systems used by New York City government agencies, develop procedures for identifying and remedying harm, develop a process for public review, and assess the feasibility of archiving automated decision systems and relevant data.

The law defines an “automated decision system” as:

“computerized implementations of algorithms, including those derived from machine learning or other data processing or artificial intelligence techniques, which are used to make or assist in making decisions.”

The law defines an “agency automated decision system” as:

“an automated decision system used by an agency to make or assist in making decisions concerning rules, policies or actions implemented that impact the public.”

While the law does not specifically call out bias, the source of algorithmic unfairness and harm can be traced in large part to biases in the data used to train algorithmic models. Data can be inherently biased when it reflects the implicit values of the limited number of people involved in its collection and labeling, or when the data chosen for a project does not represent a full cross-section of society. The latter is partly a result of copyright and other restrictions on access to proprietary data sets, and of the ease of access to older or limited data sets in which groups of people may be unrepresented or underrepresented. A machine learning algorithm trained on such data will “learn” the biases, and can perpetuate them when it is asked to make decisions.
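To make that mechanism concrete, here is a minimal sketch, not drawn from the law or from any agency system; the group labels, "skill" feature, synthetic data, and choice of scikit-learn are all my own illustrative assumptions. It shows how a model trained on skewed historical outcomes can assign different scores to two otherwise identical individuals.

```python
# Minimal sketch (illustrative only): a model trained on skewed historical
# outcomes reproduces that skew for two otherwise identical individuals.
# Assumes numpy and scikit-learn are installed; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)        # stand-in for a demographic attribute
skill = rng.normal(0.0, 1.0, size=n)      # the attribute we actually care about

# Historical labels were biased: group 1 was approved less often at the same skill level.
approved = (skill - 0.8 * group + rng.normal(0.0, 0.5, size=n)) > 0

# Training on skill AND group lets the model "learn" the historical bias.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, approved)

# Two applicants with identical skill but different group membership.
same_skill = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_skill)[:, 1])  # approval probability differs by group
```

The same effect appears, more subtly, when the group attribute is merely correlated with other features rather than supplied directly, which is part of why transparency advocates want to see both the model and the training data.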

Some argue that making algorithmic black boxes more transparent is key to understanding whether an algorithm is perpetuating bias. The New York City task force could recommend that software companies that provide automated decision systems to New York City agencies make their systems transparent by disclosing details about their models (including source code) and producing the data used to create their models.

Several stakeholders have already expressed concerns about disclosing algorithms and data to regulators. What local agency, for example, would have the resources to evaluate complex AI software systems? And how will source code and data, which may embody trade secrets and include personal information, be safeguarded from inadvertent public disclosure? And what recourse will model developers have before agencies turn over algorithms (and the underlying source code and data) in response to Freedom of Information requests and court-issued subpoenas?

Others have expressed concerns that regulating at the local level may lead to disparate and varying standards and requirements, placing a heavy burden on companies. New York City, for example, may impose standards different from those imposed by other local governments. Companies already have to deal with differing state regulations governing AI-infused autonomous vehicles, and will soon have to contend with European Union rules concerning automated decision-making (GDPR Art. 22; effective May 2018) that may differ from those imposed locally.

Before its job is done, New York City’s task force will likely hear from many stakeholders, each with their own interests. In the end, the task force’s recommendations, especially those on how to remedy harm, will receive careful scrutiny, not just from local stakeholders but also from policymakers far removed from New York City, because as AI technology’s impact on society grows, so will the pressure to regulate AI systems on a national basis.

Information and/or references used for this post came from the following:

NYC Local Law No. 49 of 2018 and various hearing transcripts

Letter to Mayor Bill de Blasio from AI Now and others, Jan. 22, 2018

EU General Data Protection Regulation (GDPR), Art. 22 (“Automated Individual Decision-Making, Including Profiling”), effective May 2018.

Dixon et al., “Measuring and Mitigating Unintended Bias in Text Classification”; AAAI 2018 (accepted paper).

W. Wallach and G. Marchant, “An Agile Ethical/Legal Model for the International and National Governance of AI and Robotics”; AAAI 2018 (accepted paper).

D. Tobey, “Software Malpractice in the Age of AI: A Guide for the Wary Tech Company”; AAAI 2018 (accepted paper).

Recognizing Individual Rights: A Step Toward Regulating Artificial Intelligence Technologies

In the movie Marjorie | Prime (August 2017), Jon Hamm plays an artificial intelligence version of Marjorie’s deceased husband, visible to Marjorie as a holographic projection in her beachfront home. As Marjorie (played by Lois Smith) interacts with Hamm’s Prime through a series of one-on-one conversations, the AI improves its cognition by observing and processing Marjorie’s emotional expressions, movements, and speech. The AI also learns from interactions with Marjorie’s son-in-law (Tim Robbins) and daughter (Geena Davis) as they recount highly personal and painful episodes of their lives. Through these interactions, Prime ends up possessing a collective knowledge greater, more personal, and more intimate than Marjorie’s original husband ever had.

Although not directly explored in the movie’s arc, the futuristic story touches on an important present-day debate about the fate of private personal data uploaded to commercial and government AI systems, data that theoretically could persist in a memory device long after the end of the human lives from which it originated, for as long as its owner chooses to keep it. It also raises questions about the fate of knowledge collected by other technologies perceiving other people’s lives, and the extent to which those percepts, combined with people’s demographic, psychographic, and behavioral characteristics, could be used to create sharply detailed personality profiles that companies and governments might abuse.

These are not entirely hypothetical issues to be addressed years down the road. Companies today offer the ability to create digital doppelgangers, or human digital twins, using AI technologies. And collecting personal information from people on a daily basis as they interact with digital assistants and other connected devices is not new. But as Marjorie|Prime and several non-cinematic AI technologies available today illustrate, AI systems give the companies that build them unprecedented means for receiving, processing, storing, and acting on some of the most personal information about people, including information about their present, past, and trending or future emotional states, which marketers have suggested for years are the keys to optimizing advertising content.

Congress recently acknowledged that “AI technologies are rapidly evolving in capability and application throughout society,” yet the US currently has no federal policy toward AI and no part of the federal government has ownership of advancing AI technologies. Left unchecked in a largely unregulated market, as is the case today, AI advancements may trend in a direction inconsistent with collective values and goals.

Identifying individual rights

One of the first questions those tasked with developing laws, regulations, and policies directed toward AI should ask is: what basic individual rights, arising in the course of people’s interactions with AI technologies, should be recognized? Answering that question will be key to ensuring that enacted laws and promulgated regulations achieve one of Congress’s recently stated goals: that AI technologies benefit society. Answering it now will also give policymakers the necessary foundation, and make them less susceptible to being unduly swayed by influential stakeholders, when they take up the task of deciding how and when to regulate AI technologies.

Identifying individual rights leads to their recognition, which leads to basic legal protections, whether in the form of legislation or regulation or, initially, as common law from judges deciding if and how to remedy a harm to person or property caused by an AI system. Fortunately, identifying individual rights is not a formidable task. The belief that people have a right to be let alone in their private lives, for example, established the basic premise for privacy laws in the US. Those same concerns about intrusion into personal lives ought to be among the first considerations for those tasked with formulating AI legislation and regulations. The notion that people have a right to be let alone has also led to the identification of other individual rights that could protect people in their interactions with AI systems. These include the right to transparency and explanation; the right of audit (with the objective of revealing bias, discrimination, and content filtering, and thus maintaining accountability); the right to know when you are dealing with an AI system and not a human; and the right to be forgotten (that is, mandatory deletion of one’s personal data), among others.

Addressing individual rights, however, may not persuade everyone to trust AI systems, especially when AI creators cannot explain precisely the basis for certain actions taken by trained AI technologies. People want to trust that owners and developers of AI systems that use private personal data will employ the best safeguards to protect that data. Trust, but verify, may need to play a role in policy-making efforts even if policies appear to comprehensively address individual rights. Trust might be addressed by imposing specific reporting and disclosure requirements, such as those suggested by federal lawmakers in pending federal autonomous driving legislation.

In the end, however, laws and regulations developed with privacy and other individual rights in mind, that address data security and other concerns people have about trusting their data to AI companies, will invariably include gaps, omissions, and incomplete definitions. The result may be unregulated commercial AI systems, and AI businesses finding workarounds. In such instances, people may have limited options other than to fully opt out, or accept that individual AI technology developers’ work was motivated by ethical considerations and a desire to make something that benefits society. The pressure within many tech companies and startups to push new products out to the world every day, however, could make prioritizing ethical considerations a challenge. Many organizations focused on AI technologies, some of which are listed below, are working to make sure that doesn’t happen.

Rights, trust, and ethical considerations in commercial endeavors can get overshadowed by financial interests and the subjective interests and tastes of individuals. It doesn’t help that companies and policymakers may also feel that winning the race for AI dominance is a factor to be considered (which is not to say that such a consideration is antithetical to protecting individual rights). This underscores the need for thoughtful analysis, sooner rather than later, of the need for laws and regulations directed toward AI technologies.

To learn more about some of these issues, visit the websites of the following organizations, which are active in AI policy research: Access Now, AI Now, and the Future of Life Institute.

Legal Tech, Artificial Intelligence, and the Practice of Law in 2018

Due in part to a better understanding of available artificial intelligence legal tech tools, more lawyers will adopt and use AI technologies in 2018 than ever before. Better awareness will also drive the creation and marketing of specialized AI-focused practice areas within law firms, more lawyers with AI expertise, new business opportunities across multiple practice groups, and possibly another round of Associate salary increases as demand for AI talent, both in-house and at law firms, escalates in response to the continued expansion of AI in key industries.

The legal services industry is poised to adopt AI technologies at the highest level seen to date. But that doesn’t mean lawyers are currently unfamiliar with AI. In fact, AI technologies are already widely used by legal practitioners, for example in the tech that powers case law searches (web services in which a user’s natural language query is processed by a machine learning algorithm that returns a ranked, sorted list of relevant cases) and in electronic discovery (predictive analytics software that finds and tags relevant electronic documents for production during a lawsuit based on a taxonomy of keywords and phrases agreed upon by the parties).
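For readers curious what the ranking step looks like under the hood, the following is a purely illustrative sketch, not any vendor’s actual system; the case names, summaries, and query are invented. It scores a handful of case summaries against a natural language query using TF-IDF vectors and cosine similarity. Commercial tools use far more sophisticated, trained models, but the ranked-retrieval idea is the same.

```python
# Minimal, illustrative sketch of ranked case-law retrieval using TF-IDF
# and cosine similarity (invented case names and summaries).
# Assumes scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cases = {
    "Smith v. Jones": "negligence claim arising from an autonomous vehicle collision",
    "Doe v. Acme Corp.": "breach of contract over delivery of machine learning software",
    "State v. Roe": "criminal sentencing informed by a risk assessment algorithm",
}
query = "negligence liability for an autonomous vehicle collision"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(cases.values())   # one row per case summary
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, doc_matrix).ravel()
ranked = sorted(zip(cases, scores), key=lambda pair: pair[1], reverse=True)

for name, score in ranked:                # most relevant case first
    print(f"{score:.3f}  {name}")
```

A real research platform would rank against millions of opinions with learned semantic models rather than simple keyword overlap, but the user experience the paragraph above describes, query in, ranked cases out, rests on this same basic idea.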

Newer AI-based software solutions, however, from companies like Kira and Ross, among dozens of others now available, may improve the legal services industry’s understanding of AI. These solutions offer increased efficiency, improved client service, and reduced operating costs. Efficiency, measured in terms of the time it takes to respond to client questions and the amount of billable hours expended, can translate into reduced operating costs for in-house counsel, law firm lawyers, judges, and their staffs, which is sure to get attention. AI-powered contract review software, for example, can take an agreement provided by opposing counsel and nearly instantaneously spot problems, a process that used to take an Associate or Partner a half-hour or more to accomplish, depending on the contract’s complexity. In-house counsel are wary of paying biglaw hourly rates for such mundane review work, so software that can perform some of the work seems like a perfect solution. The law firms and their lawyers that become comfortable using the latest AI-powered legal tech will be able to boast of being cutting edge and client-focused.

Lawyers and law firms with AI expertise are beginning to market AI capabilities on their websites to retain existing clients and capture new business, and this should increase in 2018. Firms are focusing efforts on industry segments most active in AI, such as tech, financial services (banks and financial technology companies or “fintech”), computer infrastructure (cloud services and chip makers), and other peripheral sectors, like those that make computer vision sensors and other devices for autonomous vehicles, robots, and consumer products, to name a few. Those same law firms are also looking at opportunities within the ever-expanding software as a service industry, which provides solutions for leveraging information from a company’s own data, such as human resources data, process data, quality assurance data, etc. Law practitioners who understand how these industries are using AI technologies, and AI’s limitations and potential biases, will have an edge when it comes to business development in the above-mentioned industry segments.

The impacts of AI on the legal industry in 2018 may also be reflected in law firm headcounts and salaries. Some reports suggest that the spread of AI legal tech could lead to a decrease in lawyer ranks, though most agree this will happen slowly and over several years.

At the same time, however, the increased attention directed at AI technologies by law firm lawyers and in-house counsel in 2018 may put pressure on law firms to adjust Associate salaries upward, much as many did during the dot-com era, when demand skyrocketed for new and mid-level lawyers equipped to handle cash-infused Silicon Valley startups’ IPO, intellectual property, and contract issues. A possible Associate salary spike in 2018 may also be a consequence of, and fueled by, the huge salaries reportedly being paid in the tech sector, where big tech companies spent billions in 2016 and 2017 acquiring AI startups to add talent to their rosters. A recent report suggests annual salary and other incentives in the range of $350,000 to $500,000 are being paid to newly-minted PhDs and to those with just a few years of AI experience. At those levels, recent college graduates contemplating law school and a future in the legal profession might opt instead to head to graduate school for a Masters or PhD in an AI field.

Autonomous Vehicles Get a Pass on Federal Statutory Liability, At Least for Now

Consumers may accept “good enough” when it comes to the performance of certain artificial intelligence systems, such as AI-powered Internet search results. But in the case of autonomous vehicles, a recent article in The Economist argues that those same consumers will more likely favor AI-infused vehicles demonstrating the “best” safety record.

If that holds true, a recent Congressional bill directed at autonomous vehicles–the so-called “Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution Act,” or the SELF DRIVE Act (H.R. 3388)–should be well received by safety-conscious consumers. If signed into law, however, H.R. 3388 would leave those same consumers to turn to the courts to determine liability and the magnitude of possible damages from vehicle crash events. That’s because the bill as currently written takes a pass on providing a statutory scheme for allocating crash-related liability.

H.R. 3388 passed the House in early September 2017 (a similar bill is working its way through the Senate). Like several earlier proposals made public by the House Energy and Commerce Committee in connection with hearings in June 2017, the bill is one of the first federal attempts at closely regulating AI systems embodied in a major consumer product (at the state level, at least twenty states have enacted laws addressing some aspect of self-driving vehicles). The stated purpose of the SELF DRIVE Act is to memorialize the federal role in ensuring the safety of highly automated vehicles as it relates to design, construction, and performance, by encouraging the testing and deployment of such vehicles.

Section 8 of the bill is notable in that it would direct future rulemaking requiring manufacturers to inform consumers of the capabilities and limitations of a vehicle’s “driving automation system.” The bill would define “automated driving system” as “the hardware and software that are collectively capable of performing the entire dynamic driving task on a sustained basis, regardless of whether such system is limited to a specific operational design domain.” It would define “dynamic driving task” as “the real time operational and tactical functions required to operate a vehicle in on-road traffic,” including monitoring the driving environment via object and event detection, recognition, classification, and response preparation, as well as object and event response execution.

Requiring manufacturers to inform consumers of the “capabilities and limitations” of a vehicle’s “driving automation system,” combined with published safety statistics, might steer educated consumers toward a particular make and model, much as features like lane departure warning and automatic braking do today. On the question of liability for crashes, however, H.R. 3388 would amend existing federal law to clarify that “compliance with a motor vehicle safety standard…does not exempt a person from liability at common law” and that common law claims are not preempted.

In other words, a vehicle manufacturer that meets all of H.R. 3388’s express standards (and future regulatory standards, which the bill directs the Department of Transportation and other federal agencies to write) could still be subject to common law causes of action, just as it is today.

Common law refers to the body of law developed over time by judges in the course of applying, to a given set of facts and circumstances, legal principles developed in previous court decisions (i.e., precedent). Common law liability asks which party should be held responsible, and thus pay damages, when another party alleges some harm. Judicial common law decisions are therefore generally viewed as limited to a case’s specific facts and circumstances. Testifying before the House committee on June 27, 2017, George Washington University Law School’s Alan Morrison described one of the criticisms lodged against relying solely on common law approaches to regulating autonomous vehicles and assessing liability: common law develops slowly over time.

“Traditionally, auto accidents and product liability rules have been matters of state law, generally developed by state courts, on a case by case basis,” Morrison said in prepared remarks for the record during testimony back in June. “Some scholars and others have suggested that [highly autonomous vehicles, HAVs] may be an area, like nuclear power was in the 1950s, in which liability laws, which form the basis for setting insurance premiums, require a uniform national liability answer, especially because HAVs, once they are deployed, will not stay within state boundaries. They argue that, in contrast to common law development, which can progress very slowly and depends on which cases reach the state’s highest court (and when), legislation can be acted on relatively quickly and comprehensively, without having to wait for the ‘right case’ to establish the [common] law.”

For those hoping Congress would use H.R. 3388 as an opportunity to issue targeted statutory schemes containing specific performance requirements and standards for AI-infused autonomous vehicles, which might have provided guidance for AI developers in many other industries, the bill may be viewed as disappointing. H.R. 3388 leaves unanswered questions about who should be liable when complex hardware-software systems contribute to injury or simply fail to work as advertised. Autonomous vehicles rely on sensors for “monitoring the driving environment via object and event detection” and on software trained to identify objects in that data (i.e., “object and event…recognition, classification, and response preparation”). Should a sensor manufacturer be held liable if, for example, its sensor’s sampling rate is too slow and its field of view too narrow? The software provider that trained its computer vision algorithm on data from 50,000 vehicle miles traveled instead of 100,000? The vehicle manufacturer that installed those hardware and software components? What if a manufacturer decides not to disclose those limitations in its statement of the “capabilities and limitations” of its “driving automation systems”? Should a federal law even attempt to set such detailed, one-size-fits-all standards? As things stand now, answers to these questions may emerge only as courts consider them in the course of deciding liability in common law injury and product liability cases.

The Economist authors predict that companies whose AI is behind the fewest autonomous vehicle crashes “will enjoy outsize benefits.” Quantifying those benefits, however, may need to wait until after potential liability issues in AI-related cases become clearer over time.

The AI Summit New York City: Takeaways For the Legal Profession

This week, business, technology, and academic thought leaders in Artificial Intelligence are gathered at The AI Summit in New York City, one of the premier international conferences offered for AI professionals. Below, I consider two of the three takeaways from Summit Day 1, published yesterday by AI Business, from the perspective of lawyers looking for opportunities in the burgeoning AI market.

“1. The tech landscape is changing fast – with big implications for businesses”

If a year from now your law practice has not fielded at least one query from a client about AI technologies, you are probably going out of your way to avoid the subject. It is almost universally accepted that AI technologies in one form or another will impact nearly every industry. Based on recently-published salary data, the industries most active in AI are tech (think Facebook, Amazon, Alphabet, Microsoft, Netflix, and many others), financial services (banks and financial technology companies or “fintech”), and computer infrastructure (Amazon, Nvidia, Intel, IBM, and many others; in areas such as chips for growing computational speed and throughput, and cloud computing for big data storage needs).

Of course, other industries are also seeing plenty of AI development. The automotive industry, for example, has already begun adopting machine learning, computer vision, and other AI technologies for autonomous vehicles. The robotics and chatbot industries have made great strides lately, both in humanoid robot development and in consumer-machine interaction products such as stationary and mobile digital assistants (e.g., personal robotic assistants, as well as utility devices like autonomous vacuums). And the software-as-a-service industry, which leverages information from a company’s own data, such as human resources data, process data, healthcare data, etc., seems to offer new solutions for improving efficiency every day.

All of this will translate into consumer adoption of specific AI technologies, which is reported to already be at 10% and growing. The fast pace of technology development and adoption may translate into new business opportunities for lawyers, especially those who invest time in learning about AI technologies. After all, as in any area of law, understanding the challenges facing clients is essential for developing appropriate legal strategies, as well as for targeting business development resources.

“2. AI is a disruptive force today, not tomorrow – and business must adapt”

Adapt or be left behind is a familiar warning, and there is plenty of evidence demonstrating that it holds true in many situations.

Lawyers and law firms are, as institutions, generally slow to change, often because things that disrupt the status quo are viewed through a cautionary lens. This is not surprising, given that a lawyer’s work often involves thoughtfully spotting potential risks and finding ways to address them. A fast-changing business landscape racing to keep up with the latest AI technologies may be seen as inherently risky, especially in the absence of targeted laws and regulations providing guidance, as is the case in the AI industry today. Even so, exploring how to adapt one’s law practice to a world filled with AI technologies should be near the top of every lawyer’s list of things to consider for 2018.

How Privacy Law’s Beginnings May Suggest An Approach For Regulating Artificial Intelligence

A survey conducted in April 2017 by Morning Consult suggests most Americans are in favor of regulating artificial intelligence technologies. Of 2,200 American adults surveyed, 71% said they strongly or somewhat agreed that there should be national regulation of AI, while only 14% strongly or somewhat disagreed (15% did not express a view).

Technology and business leaders speaking out on whether to regulate AI fall into one of two camps: those who generally favor an ex post, case-by-case, common law approach, and those who prefer establishing a statutory and regulatory framework that, ex ante, sets forth clear do’s and don’ts and penalties for violations. (If you’re interested in learning about the challenges of ex post and ex ante approaches to regulation, check out Matt Scherer’s excellent article, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” published in the Harvard Journal of Law and Technology (2016)).

Advocates for a proactive regulatory approach caution that the alternative is fraught with predictable danger. Elon Musk, for one, notes that “[b]y the time we’re reactive in A.I., regulation’s too late.” Others, including leaders of some of the biggest AI technology companies in the industry, backed by lobbying organizations like the Information Technology Industry Council (ITI), feel that the hype surrounding AI does not justify quick Congressional action at this time.

Musk criticized this wait-and-see approach. “Normally, the way regulation’s set up,” he said, “a whole bunch of bad things happen, there’s a public outcry, and then after many years, a regulatory agency is set up to regulate that industry. There’s a bunch of opposition from companies who don’t like being told what to do by regulators, and it takes forever. That in the past has been bad but not something which represented a fundamental risk to the existence of civilization.”

Assuming AI regulation is inevitable, how should regulators (and legislators) approach such a formidable task? After all, AI technologies come in many forms, and their uses extend across multiple industries, including some already burdened with regulation. The history of privacy law may provide the answer.

Without question, privacy concerns, and privacy laws, touch on AI technology use and development. That’s because so many of today’s human-machine interactions involving AI are powered by user-provided or user-mined data. Search histories, images people appear in on social media, purchasing habits, home ownership details, political affiliations, and many other data points are well known to marketers and others whose products and services rely on characterizing potential customers using, for example, machine learning algorithms, convolutional neural networks, and other AI tools. In the field of affective computing, human-robot and human-chatbot interactions are driven by a person’s voice, facial features, heart rate, and other physiological signals, which are the percepts an AI system collects, processes, stores, and uses when deciding what actions to take, such as how to respond to user queries.

Privacy laws evolved from a period during late nineteenth century America when journalists were unrestrained in publishing sensational pieces for newspapers or magazines, basically the “fake news” of the time. This Yellow Journalism, as it was called, prompted legal scholars to express a view that people had a “right to be let alone,” setting in motion the development of a new body of law involving privacy. The key to regulating AI, as it was in the development of regulations governing privacy, may be the recognition of a specific personal right that is, or is expected to be, infringed by AI systems.

In the case of privacy, attorneys Samuel Warren and Louis Brandeis (later, Justice Brandeis) were the first to articulate a personal privacy right. In The Right to Privacy, published in the Harvard Law Review in 1890, Warren and Brandeis observed that “the press is overstepping in every direction the obvious bounds of propriety and of decency. Gossip…has become a trade.” They contended that “for years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons.” They argued that a right of privacy was entitled to recognition because “in every [] case the individual is entitled to decide whether that which is his shall be given to the public.” A violation of a person’s right of privacy, they wrote, should be actionable.

Soon after, courts began recognizing the right of privacy in civil cases. By 1960, in his seminal review article Privacy (48 Cal. L. Rev. 383), William Prosser could write that, “In one form or another,” the right of privacy “was declared to exist by the overwhelming majority of the American courts.” That recognition led, over time, toward more uniform standards: some states enacted limited or sweeping state-specific statutes, replacing the common law with statutory provisions and penalties, and federal appeals courts weighed in when conflicts between state laws arose. This slow progression from initial recognition of a personal privacy right in 1890 to today’s modern statutes and expansive body of common law won’t appeal to those pushing for regulation of AI now.

Even so, the process has to begin somewhere, and it could very well start with an assessment of the personal rights that should be recognized arising from interactions with or the use of AI technologies. Already, personal rights recognized by courts and embodied in statutes apply to AI technologies. But there is one personal right, potentially unique to AI technologies, that has been suggested: the right to know why (or how) an AI technology took a particular action (or made a decision) affecting a person.

Take, for example, an adverse credit decision by a bank that relies on machine learning algorithms to decide whether a customer should be given credit. Should that customer have the right to know why (or how) the system made the creditworthiness decision? Fast Company writer Cliff Kuang explored this proposition in his recent article, “Can A.I. Be Taught to Explain Itself?,” published in The New York Times Magazine (November 21, 2017).

If AI could explain itself, the banking customer might want to ask what kind of training data was used and whether that data was biased, whether an errant line of Python code was to blame, or whether the AI gave appropriate weight to the customer’s credit history. Given the nature of AI technologies, some of these questions, and even more general ones, may only be answered by opening the AI black box, and even then it may be impossible to pinpoint how the AI technology made its decision. In Europe, “tell me why/how” regulations are expected to become effective in May 2018. As I will discuss in a future post, many practical obstacles face those wishing to build a statutory or regulatory framework around the right of consumers to demand that a business’s AI explain why it made or took a particular adverse action.
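As a rough illustration of what a “tell me why” answer could look like in the simplest case, the sketch below uses hypothetical features, synthetic data, and an intentionally simple linear model; it is not how any bank’s system actually works. It trains an interpretable credit model and prints each feature’s contribution to one applicant’s score. For complex black-box models, producing an equivalent explanation is precisely the hard, unsolved part of the debate.

```python
# Minimal sketch with hypothetical features and synthetic data: one simple
# way to answer "why was this applicant scored this way?" when the credit
# model is an interpretable linear model. Assumes numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["credit_history_years", "debt_to_income", "late_payments"]

# Synthetic training data: longer history helps; high debt ratio and late payments hurt.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3))
y = (1.5 * X[:, 0] - 2.0 * X[:, 1] - 1.0 * X[:, 2] + rng.normal(0.0, 0.5, size=5000)) > 0

model = LogisticRegression().fit(X, y)

# One hypothetical applicant and the model's approval probability.
applicant = np.array([[-0.5, 1.2, 0.8]])
print("approval probability:", model.predict_proba(applicant)[0, 1])

# For a linear model, coefficient * feature value shows how strongly each
# feature pushed this applicant toward approval (+) or denial (-).
contributions = model.coef_[0] * applicant[0]
for name, value in sorted(zip(feature_names, contributions), key=lambda pair: pair[1]):
    print(f"{name:>22}: {value:+.2f}")
```

A per-feature breakdown like this is easy for a linear model; for deep networks and other opaque systems, regulators and developers are still debating what counts as a sufficient explanation, which is exactly the gap any “right to explanation” rule would have to confront.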

Regulation of AI will likely happen. In fact, we are already seeing the beginning of direct legislative/regulatory efforts aimed at the autonomous driving industry. Whether interest in expanding those efforts to other AI technologies grows or lags may depend at least in part on whether people believe they have personal rights at stake in AI, and whether those rights are being protected by current laws and regulations.