In Your Face Artificial Intelligence: Regulating the Collection and Use of Face Data (Part II)

The technologies behind “face data” collection, detection, recognition, and affect (emotion) analysis were previously summarized. Use cases for face data, reported concerns about the proliferation of face data collection efforts, and instances of face data misuse were also briefly discussed.

In this follow-on post, a proposed “face data” definition is explored from a governance perspective, with the aim of providing more certainty as to when heightened requirements ought to be imposed on those involved in face data collection, storage, and use. This proposal is motivated in part by the increased risk of identity theft and other misuse arising from unauthorized disclosure of face data, but it also recognizes that overregulation could subject persons and entities to onerous requirements.

Illinois’ decade-old Biometric Information Privacy Act (“BIPA”) (740 ILCS 14/1 (2008)), which has been widely cited by privacy hawks and asserted against social media and other companies in US federal and various state courts (primarily Illinois and California), provides a starting point for a uniform face data definition. The BIPA defines “biometric identifier” to include a scan of a person’s face geometry. The scope and meaning of the definition, however, remains ambiguous despite close scrutiny by several courts. In Monroy v. Shutterfly, Inc., for example, a federal district court found that mere possession of a digital photograph of a person and “extraction” of information from such photograph is excluded from the BIPA:

“It is clear that the data extracted from [a] photograph cannot constitute ‘biometric information’ within the meaning of the statute: photographs are expressly excluded from the [BIPA’s] definition of ‘biometric identifier,’ and the definition of ‘biometric information’ expressly excludes ‘information derived from items or procedures excluded under the definition of biometric identifiers.’”

Slip op., No. 16-cv-10984 (N.D. Ill. 2017). Despite that finding, the Monroy court concluded that a “scan of face geometry” under the statute’s definition includes a “scan” of a person’s face from a photograph (as well as a live scan of a person’s face geometry). Because the issue was not raised in Monroy, the court did not address whether the BIPA applies when a scan of any part of a person’s face geometry from an image is insufficient to identify the person in the image. That is, the Monroy holding arguably applies to any data made by a scan, even if that data by itself cannot lead to identifying anyone.

By way of comparison, the European Union’s General Data Protection Regulation (GDPR), which governs “personal data” (i.e., any information relating to an identified or identifiable natural person), will regulate biometric information when it goes into effect in late May 2018. Like the BIPA, the GDPR will place restrictions on “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data” (GDPR, Article 4) (emphasis added). Depending on how EU member state courts interpret the GDPR generally, and Article 4 specifically, any process that creates biometric data that relates to, could lead to, or allows or confirms the identification of a person is potentially covered under the GDPR.

Thus, to enhance clarity for potentially regulated individuals and companies dealing with US citizens, “face data” could be defined, as set forth below, in a way that considers a minimum quantity or quality of data below which a regulated entity would not be within the scope of the definition (and thus not subject to regulation):

“Face data” means data in the possession or control of a regulated entity obtained from a scan of a person’s face geometry or face attribute, as well as any information and data derived from or based on the geometry or attribute data, if in the aggregate the data in the possession or control of the regulated entity is sufficient for determining an identity of the person or the person’s emotional (physiological) state.

The term “determining an identity of the person or the person’s emotional (physiological) state” relates to any known computational or manual technique for identifying a person or that person’s emotions.

The term “is sufficient” is interpretable; it would need to be defined explicitly (or, as is often the case in legislation, left for the courts to fully interpret). The intent of “sufficient” is to permit the anonymization or deletion of data following the processing of video signals or images of a person’s face to avoid being categorized as possessing regulated face data (to the extent probabilistic models and other techniques could not be used to later de-anonymize or reconstruct the missing data and identify a person or that person’s emotional state). The burden of establishing the quality and quantity of face data that is insufficient for identification purposes should rest with the regulated entity that possesses or controls face data.

Face data could include data from the face of a “live” person captured by a camera (e.g., surveillance) as well as data extracted from existing media (e.g., stored images). It is not necessary, however, for the definition to encompass the mere virtual depiction or display of a person in a live video or existing image or video. Thus, digital pictures of friends or family on a personal smartphone would not be face data, and the owner of the phone should not be a regulated entity subject to face data governance. An app on that smartphone, however, that uses face detection algorithms to process the pictures for facial recognition and sends that data to a remote app server for storage and use (e.g., for extraction of emotion information) would create face data.

By way of other examples, a process involving pixel-level data extracted from an image (a type of “scan”) by a regulated entity would create face data if that data, combined with any other data possessed or controlled by the entity, could be used in the aggregate to identify the person in the image or that person’s emotional state. Similarly, data and information reflecting changes in facial expressions, derived by pixel-level comparisons of time-slice images from a video (also a type of scan), would be information derived from face data and thus regulated face data, assuming the derived data combined with other data owned or possessed could be used to identify the person in the image or the person’s emotional state.
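
As a purely illustrative aside, the short Python sketch below (using OpenCV, with hypothetical frame file names) shows the kind of pixel-level, time-slice comparison described above; the numeric difference score it produces is exactly the sort of derived data the proposed definition would reach, but only if it could be combined with other held data to identify the person or the person’s emotional state.

```python
# Illustrative sketch only: compare pixel-level data between two time-slice
# frames of a video to derive a measure of facial-expression change.
# The frame file names are hypothetical.
import cv2
import numpy as np

def frame_difference(frame_a_path: str, frame_b_path: str) -> float:
    """Return the mean absolute pixel difference between two frames."""
    a = cv2.imread(frame_a_path, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(frame_b_path, cv2.IMREAD_GRAYSCALE)
    if a is None or b is None:
        raise FileNotFoundError("Could not read one of the frames.")
    b = cv2.resize(b, (a.shape[1], a.shape[0]))  # align dimensions
    return float(np.mean(cv2.absdiff(a, b)))

# The derived score is data "based on" the scanned imagery; whether it is
# regulated "face data" under the proposal depends on what else the holder
# possesses or controls.
score = frame_difference("frame_t0.png", "frame_t1.png")
print(f"Mean pixel change between frames: {score:.2f}")
```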

Information about the relative positions of facial points based on facial action units could also be data derived from or based on the original scan and thus would be face data, assuming again that the data, combined with any other data possessed by a regulated entity, could be used to identify a person or that person’s emotional state. Classifications of a person’s emotional state (e.g., joy, surprise) based on extracted image data would also be information derived from or based on a person’s face data and thus would also be face data.

Features extracted using deep learning convolutions of an image of a person’s face could also be face data if the convolution information along with other data in the possession or control of a regulated entity could be used to identify a person or that person’s emotional state.
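
To make the convolution example concrete, here is a minimal sketch, assuming a Python environment with PyTorch and torchvision 0.13 or later; the image file name is hypothetical, and a real face recognition system would use a network trained on faces rather than a generic ImageNet model. The governance point is simply that the resulting feature vector is information derived from the image.

```python
# Illustrative sketch: extract convolutional features from a face image using
# a generic pretrained network (assumes torchvision >= 0.13).
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()  # drop the classifier head; keep 512-d features
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("face.jpg").convert("RGB")  # hypothetical file
with torch.no_grad():
    features = model(preprocess(image).unsqueeze(0))  # shape: (1, 512)
print(features.shape)
```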

For banks and other institutions that use face recognition for authentication purposes, sufficient face data would obviously need to be in the bank’s possession at some point in time to positively identify a customer making a transaction. This could subject the institution to face data governance during that time period. In contrast, a social media platform that permits users to upload images of people but does not scan or otherwise process the images (such as by cross-referencing other existing data) would not create face data and thus would not subject the platform to face data governance, even if it also possessed tagged images of the same individuals in the uploaded images. Thus, the mere possession or control of images, even if the images could potentially contain identifying information, would not constitute face data. But a platform that scans (processes) the uploaded images for identification purposes, or that sells or provides user-uploaded images to a third party that scans them to extract face geometry or attribute data for purposes such as targeted advertising, could subject the platform and the third party to face data governance.

The proposed face data definition, which could be modified to include “body data” and “voice data,” is merely one example that US policymakers and stakeholders might consider in the course of assessing the scope of face data governance in the US.  The definition does not exclude the possibility that any number of exceptions, exclusions, and limitations could be implemented to avoid reaching actors and actions that should not be covered, while also maintaining consistency with existing laws and regulations. Also, the proposed definition is not intended to directly encompass specific artificial intelligence technologies used or created by a regulated entity to collect and use face data, including the underlying algorithms, models, networks, settings, hyper-parameters, processors, source code, etc.

In a follow-on post, possible civil penalties for harms caused by face data collection, storage, and use will be briefly considered, along with possible defenses a regulated person or entity may raise in litigation.

Republicans Propose Commission to Study Artificial Intelligence Impacts on National Security

Three Republican members of Congress are co-sponsoring a new bill (H.R. 5356) “To establish the National Security Commission on Artificial Intelligence.” Introduced by Rep. Stefanik (R-NY) on March 20, 2018, the bill would create a temporary 11-member Commission tasked with producing an initial report followed by comprehensive annual reports, each providing issue-specific recommendations about national security needs and related risks from advances in artificial intelligence, machine learning, and associated technologies.

Issues the Commission would review include AI competitiveness in the context of national and economic security; means to maintain a competitive advantage in AI (including machine learning and quantum computing); other countries’ AI investment trends; workforce and education incentives to boost the number of AI workers; risks from advances in the military employment of AI by foreign countries; and ethics, privacy, and data security, among others.

Unlike other Congressional bills of late (see H.R. 4625–FUTURE of AI Act; H.R. 4829–AI JOBS Act) that propose establishing committees under Executive Branch departments and constituted with both government employees and private citizens, H.R. 5356 would establish an independent Executive Branch commission made up exclusively of Federal employees appointed by Department of Defense and various Armed Services Committee members, with no private citizen members (ostensibly because of national security and security clearance issues).

Congressional focus on AI technologies has generally been limited to highly autonomous vehicles and vehicle safety, with other areas, such as military impacts, receiving much less attention. By way of contrast, the UK’s Parliament seems far ahead. The UK Parliament Select Committee on AI has already met over a dozen times since mid-2017 and its members have convened numerous public meetings to hear from dozens of experts and stakeholders representing various disciplines and economic sectors.

In Your Face Artificial Intelligence: Regulating the Collection and Use of Face Data (Part I)

Of all the personal information individuals agree to provide companies when they interact with online or app services, perhaps none is more personal and intimate than a person’s facial features and their moment-by-moment emotional states. And while it may seem that face detection, face recognition, and affect analysis (emotional assessments based on facial features) are technologies only sophisticated and well-intentioned tech companies with armies of data scientists and stack engineers are competent to use, the reality is that advances in machine learning, microprocessor technology, and the availability of large datasets containing face data have lowered entrance barriers to conducting robust face detection, face recognition, and affect analysis to levels never seen before.

In fact, anyone with a bit of programming knowledge can incorporate open-source algorithms and publicly available image data, train a model, create an app, and start collecting face data from app users. At the most basic entry point, all one really needs is a video camera with built-in face detection algorithms and access to tagged images of a person to start conducting facial recognition. Several commercial APIs also make it relatively easy to tap into facial coding databases for use in assessing others’ emotional states from face data. If you’re not persuaded by the relative ease with which face data can be captured and used, just drop by any college (or high school) hackathon and see creative face data tech in action.
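
By way of illustration only, the Python sketch below uses OpenCV’s bundled Haar-cascade face detector (the image file name is hypothetical) to show just how low that entry barrier is:

```python
# Illustrative sketch: detect faces in an image with OpenCV's bundled
# Haar-cascade model. The input image path is hypothetical.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("group_photo.jpg")
if image is None:
    raise FileNotFoundError("group_photo.jpg not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"Detected {len(faces)} face(s)")
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("group_photo_annotated.jpg", image)
```

A detector like this only locates faces; pairing its output with tagged reference images or a commercial recognition API is what turns detection into identification.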

In this post, the uses of face data are considered, along with a brief summary of the concerns raised about collecting and using face and emotional data. Part II will explore options for face data governance, which include the possibility of new or stronger laws and regulations and policies that a self-regulating industry and individual stakeholders could develop.

The many uses of our faces

Today’s mobile and fixed cameras and AI-based face detection and recognition software enable real-time controlled access to facilities and devices. The same technology allows users to identify fugitive and missing persons in surveillance videos, private citizens interacting with police, and unknown persons of interest in online images.

The technology provides a means for conducting and verifying commercial transactions using face biometric information, tracking people automatically while in public view, and extracting physical traits from images and videos to supplement individual demographic, psychographic, and behavioristic profiles.

Face software and facial coding techniques and models are also making it easier for market researchers, educators, robot developers, and autonomous vehicle safety designers to assess emotional states of people in human-machine interactions.

These and other use cases are possible in part because of advances in camera technology, the proliferation of cameras (think smart phones, CCTVs, traffic cameras, laptop cameras, etc.) and social media platforms, where millions of images and videos are created and uploaded by users every day. Increased computer processing power has led to advances in face recognition and affect-based machine learning research and improved the ability of complex models to execute faster. As a result, face data is relatively easy to collect, process, and use.

One can easily imagine the many ways face data might be abused, and some of the abuses have already been reported. Face data and machine learning models have been improperly used to create pornography, for example, and to track individuals in stores and other public locations without notice and without seeking permission. Models based on face data have reportedly been developed for no apparent purpose other than predictive classification of beauty and sexual orientation.

Face recognition models are also subject to errors. Misidentification is a persistent weakness of both face recognition and affect-based models; despite improvements, face recognition is not perfect, and that imperfection can translate into false positive identifications. Obviously, tragic consequences can occur if police or government agencies make decisions based on a false positive (or false negative) identification of a person.
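
The mechanics behind those errors are easy to sketch. Recognition systems typically compare numerical face “embeddings” and declare a match when a similarity score crosses a threshold; any threshold trades false positives against false negatives. The vectors and threshold in this toy Python example are entirely made up.

```python
# Toy illustration of a false positive: two embeddings that happen to be
# similar exceed the (hypothetical) match threshold even though they stand
# in for different people by construction.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
probe = rng.normal(size=128)                   # unknown face's embedding
# Stand-in for a different person whose features happen to be similar:
enrolled = probe + rng.normal(scale=0.8, size=128)
THRESHOLD = 0.5                                # hypothetical decision cutoff

score = cosine_similarity(probe, enrolled)
if score > THRESHOLD:
    print(f"Match declared (score={score:.2f}) -- a false positive here")
else:
    print(f"No match (score={score:.2f})")
```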

Face data models have been shown to perform more accurately on persons with lighter skin color. And affect models, while raising fewer concerns compared to face recognition due mainly to the slower rate of adoption of the technology, may misinterpret emotions if culture, geography, gender, and other factors are not accounted for in training data.

Of course, instances of reported abuse, bias, and data breaches overshadow the many unreported positive uses and machine learning applications of face data. But as is often the case, problems tend to catch the eyes of policymakers, regulators, and legislators, though overreaction to hyped problems can result in a patchwork of regulations and standards that go beyond addressing the underlying concerns and cause unintended effects, such as possibly stifling innovation and reducing competitiveness.

Moreover, reactionary regulation doesn’t play well with fast-moving disruptive tech, such as face recognition and affective computing, where the law seems to always be in catch-up mode. Compounding the governance problem is the notion that regulators and legislators are not crystal ball readers who can see into the future. Indeed, future uses of face data technologies may be hard to imagine today.

Even so, what matters to many is what governments and companies are doing with still images and videos, and specifically how face data extracted from media are being used, sometimes without consent. These concerns raise questions of transparency, privacy laws, terms of service and privacy policy agreements, data ownership, ethics, and data breaches, among others. They also implicate issues of whether and when federal and state governments should tighten existing regulations and impose new regulations where gaps exist in face data governance.

With recent data breaches making headlines and policymakers and stakeholders gathering in 2018 to examine AI’s impacts, there is no better time than now to revisit the need for stronger laws and to develop new technical- and ethical-based standards and guidelines applicable to face data. The next post will explore these issues.

A Proposed AI Task Force to Confront Talent Shortage and Workforce Changes

Just over a month after House and Senate commerce committees received companion bills recommending a federal task force to globally examine the “FUTURE” of Artificial Intelligence in the United States (H.R. 4625; introduced Dec. 12, 2017), a House education and workforce committee is set to consider a bill calling for a task force assessment of the impacts of AI technologies on the US workforce.

If enacted, the “Artificial Intelligence Job Opportunities and Background Summary Act of 2018,” or the “AI JOBS Act of 2018” (H.R. 4829; introduced Jan. 18, 2018), would require the Secretary of Labor to report on impacts and growth of AI, industries and workers who may be most impacted by AI, expertise and education needed in an AI economy (compared to today), an identification of workers who will experience expanded career opportunities from AI and those who may be vulnerable to career displacement, and ways to alleviate workforce displacement and prepare a future AI workforce.

Assessing these issues now is critical. Former Senator Tom Daschle and David Beier, in a recent opinion published in The Hill, see a “dramatic set of changes” in the nature of work in America as AI technologies become more entrenched in the US economy. Citing a McKinsey Global Institute study of 800 occupations, Daschle and Beier conclude that AI technologies will not cause net job losses. Rather, job losses will likely be offset by job changes and gains in fields such as healthcare, infrastructure development, energy, and in fields that do not exist today. They cite Gartner Research estimates suggesting millions of new jobs will be created directly or indirectly as a result of the AI economy.

Already there are more AI-related jobs than high-skilled workers to fill them. One popular professional networking site currently lists over 6,000 “artificial intelligence” jobs. Chinese internet giant Tencent estimates there are only 300,000 AI experts worldwide (recent estimates by Toronto-based Element AI put that figure at merely 90,000 AI experts). In testimony this week before a House Information Technology subcommittee, Intel’s CTO Amir Khosrowshahi said that “Workers need to have the right skills to create AI technologies and right now we have too few workers to do the job.” Huge salaries for newly minted computer science PhDs will draw more people to the field, but job openings are likely to outpace available talent even as record numbers of students enroll in machine learning and related AI classes at top US universities.

If AI job gains shift workers disproportionately toward high-skilled jobs, the result may be continued job opportunity inequality. A 2016 study by Georgetown University’s Center on Education and the Workforce found that “out of the 11.6 million jobs created in the post-recession economy, 11.5 million went to workers with at least some college education.” The study authors found that, since 2008, graduate degree workers had the most job gains (83%), predominantly in high-skill occupations, and college graduates saw the next highest job gains (57%), also in high-skill jobs. The highest job growth was seen in management, healthcare, and computer and mathematical sciences. These same fields are prime for a future influx of highly-skilled AI workers.

The US is not alone in raising concerns about job and workforce changes in an AI economy. The UK Parliament’s Artificial Intelligence Committee, for example, is confronting challenges in re-educating UK’s workforce to improve skills needed to work alongside AI systems. The US may need to do more to catch up, according to Mr. Khosrowshahi. “Current federal funding levels [in tech education],” he argued, “are not keeping pace with the rest of the industrialized world.”

The AI JOBS Act of 2018 presents an opportunity for US policymakers to develop novel approaches to address expected workforce shifts caused by an AI economy. If nothing is done, the US could find itself at a competitive disadvantage with increasing economic inequality.

New York City Task Force to Consider Algorithmic Harm

One might hear discussions about backpropagation, activation functions, and gradient descent when visiting an artificial intelligence company. But more recently, terms like bias and harm associated with AI models and products have entered tech’s vernacular. These issues also have the attention of many outside of the tech world following reports of AI systems performing better for some users than for others when making life-altering decisions about prison sentences, creditworthiness, and job hiring, among others.

Judging by the number of recently accepted conference papers on algorithmic bias, AI technologists, ethicists, and lawyers seem to be proactively addressing the issue by sharing various technical and other solutions with one another. At the same time, at least one legislative body–the New York City Council–has decided to explore ways to regulate AI technology with an unstated goal of rooting out bias (or at least revealing its presence) by making AI systems more transparent.

New York City’s “Automated decision systems used by agencies” law (NYC Local Law No. 49 of 2018, effective January 11, 2018) creates a task force under the aegis of Mayor de Blasio’s office. The task force will convene no later than early May 2018 for the purpose of identifying automated decision systems used by New York City government agencies, developing procedures for identifying and remedying harm, developing a process for public review, and assessing the feasibility of archiving automated decision systems and relevant data.

The law defines an “automated decision system” as:

“computerized implementations of algorithms, including those derived from machine learning or other data processing or artificial intelligence techniques, which are used to make or assist in making decisions.”

The law defines an “agency automated decision system” as:

“an automated decision system used by an agency to make or assist in making decisions concerning rules, policies or actions implemented that impact the public.”

While the law does not specifically call out bias, the source of algorithmic unfairness and harm can be traced in large part to biases in the data used to train algorithmic models. Data can be inherently biased when it reflects the implicit values of a limited number of people involved in its collection and labelling, or when the data chosen for a project does not represent a full cross-section of society (which is partly the result of copyright and other restrictions on access to proprietary data sets, and the ease of access to older or limited data sets where groups of people may be unrepresented or underrepresented). A machine algorithm trained on this data will “learn” the biases, and can perpetuate bias when it is asked to make decisions.
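
One common way this kind of learned bias is surfaced, sketched below in Python with entirely made-up labels and predictions, is to disaggregate a model’s accuracy by group rather than reporting a single overall number.

```python
# Toy sketch: compute a model's accuracy per demographic group. Aggregate
# accuracy here is 75%, which hides a 1.0-vs-0.5 gap between the two groups.
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        if truth == prediction:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

test_results = [  # invented evaluation records
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]
print(per_group_accuracy(test_results))  # {'group_a': 1.0, 'group_b': 0.5}
```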

Some argue that making algorithmic black boxes more transparent is key to understanding whether an algorithm is perpetuating bias. The New York City task force could recommend that software companies that provide automated decision systems to New York City agencies make their systems transparent by disclosing details about their models (including source code) and producing the data used to create their models.

Several stakeholders have already expressed concerns about disclosing algorithms and data to regulators. What local agency, for example, would have the resources to evaluate complex AI software systems? And how will source code and data, which may embody trade secrets and include personal information, be safeguarded from inadvertent public disclosure? And what recourse will model developers have before agencies turn over algorithms (and the underlying source code and data) in response to Freedom of Information requests and court-issued subpoenas?

Others have expressed concerns that regulating at the local level may lead to disparate and varying standards and requirements, placing a huge burden on companies. For example, New York City may impose standards different from those imposed by other local governments. Already, companies are having to deal with different state regulations governing AI-infused autonomous vehicles, and will soon have to contend with European Union regulations concerning algorithmic data (GDPR Art. 22; effective May 2018) that may be different than those imposed locally.

Before its job is done, New York City’s task force will likely hear from many stakeholders, each with their own special interests. In the end, the task force’s recommendations, especially those on how to remedy harm, will receive careful scrutiny, not just from local stakeholders but also from policymakers far removed from New York City; as AI technology’s impacts on society grow, so will the pressure to regulate AI systems on a national basis.

Information and/or references used for this post came from the following:

NYC Local Law No. 49 of 2018 and various hearing transcripts

Letter to Mayor Bill de Blasio, Jan. 22, 2018, from AI Now and others

EU General Data Protection Regulations (GDPR), Art. 22 (“Automated Individual Decision-Making, Including Profiling”), effective May 2018.

Dixon et al., “Measuring and Mitigating Unintended Bias in Text Classification”; AAAI 2018 (accepted paper).

W. Wallach and G. Marchant, “An Agile Ethical/Legal Model for the International and National Governance of AI and Robotics”; AAAI 2018 (accepted paper).

D. Tobey, “Software Malpractice in the Age of AI: A Guide for the Wary Tech Company”; AAAI 2018 (accepted paper).

Recognizing Individual Rights: A Step Toward Regulating Artificial Intelligence Technologies

In the movie Marjorie | Prime (August 2017), Jon Hamm plays an artificial intelligence version of Marjorie’s deceased husband, visible to Marjorie as a holographic projection in her beachfront home. As Marjorie (played by Lois Smith) interacts with Hamm’s Prime through a series of one-on-one conversations, the AI improves its cognition by observing and processing Marjorie’s emotional expressions, movements, and speech. The AI also learns from interactions with Marjorie’s son-in-law (Tim Robbins) and daughter (Geena Davis), as they recount highly personal and painful episodes of their lives. Through these interactions, Prime ends up possessing a collective knowledge greater and more personal and intimate than Marjorie’s original husband ever had.

Although not directly explored in the movie’s arc, the futuristic story touches on an important present-day debate about the fate of private personal data being uploaded to commercial and government AI systems, data that theoretically could persist in a memory device long after the end of the human lives from which the data originated, for as long as its owner chooses to keep it. It also raises questions about the fate of knowledge collected by other technologies perceiving other people’s lives, and to what extent these percepts, combined with people’s demographic, psychographic, and behavioristic characteristics, would be used to create sharply detailed personality profiles that companies and governments might abuse.

These are not entirely hypothetical issues to be addressed years down the road. Companies today provide the ability to create digital doppelgangers, or human digital twins, using AI technologies. And collecting personal information from people on a daily basis as they interact with digital assistants and other connected devices is not new. But as Marjorie|Prime and several non-cinematic AI technologies available today illustrate, AI systems allow the companies who build them unprecedented means for receiving, processing, storing, and taking actions based on some of the most personal information about people, including information about their present, past, and trending or future emotional states, which marketers for years have been suggesting are the keys to optimizing advertising content.

Congress recently acknowledged that “AI technologies are rapidly evolving in capability and application throughout society,” but the US currently has no federal policy towards AI and no part of the federal government has ownership of the advancement of AI technologies. Left unchecked in an unregulated market, as is largely the case today, AI technological advancements may trend in a direction that may be inconsistent with collective values and goals.

Identifying individual rights

One of the first questions those tasked with developing laws, regulations, and policies directed toward AI should ask is: what basic individual rights–rights that arise in the course of people interacting with AI technologies–should be recognized? Answering that question will be key to ensuring that enacted laws and promulgated regulations achieve one of Congress’s recently stated goals: ensuring AI technologies benefit society. Answering it now will also help ensure that policymakers have the necessary foundation in front of them and are not unduly swayed by influential stakeholders when they take up the task of deciding how and when to regulate AI technologies.

Identifying individual rights leads to their recognition, which leads to basic legal protections, whether in the form of legislation or regulation or, initially, as common law from judges deciding if and how to remedy a harm to a person or property caused by an AI system. Fortunately, identifying individual rights is not a formidable task. The belief that people have a right to be let alone in their private lives, for example, established the basic premise for privacy laws in the US. Those same concerns about intrusion into personal lives ought to be among the first considerations by those tasked with formulating and developing AI legislation and regulations. The notion that people have a right to be let alone has led to the identification of other individual rights that could protect people in their interactions with AI systems. These include the right of transparency and explanation; the right of audit (with the objective of revealing bias, discrimination, and content filtering, and thus maintaining accountability); the right to know when you are dealing with an AI system and not a human; and the right to be forgotten (that is, mandatory deletion of one’s personal data), among others.

Addressing individual rights, however, may not persuade everyone to trust AI systems, especially when AI creators cannot explain precisely the basis for certain actions taken by trained AI technologies. People want to trust that owners and developers of AI systems that use private personal data will employ the best safeguards to protect that data. Trust, but verify, may need to play a role in policy-making efforts even if policies appear to comprehensively address individual rights. Trust might be addressed by imposing specific reporting and disclosure requirements, such as those suggested by federal lawmakers in pending federal autonomous driving legislation.

In the end, however, laws and regulations developed with privacy and other individual rights in mind, that address data security and other concerns people have about trusting their data to AI companies, will invariably include gaps, omissions, and incomplete definitions. The result may be unregulated commercial AI systems, and AI businesses finding workarounds. In such instances, people may have limited options other than to fully opt out, or accept that individual AI technology developers’ work was motivated by ethical considerations and a desire to make something that benefits society. The pressure within many tech companies and startups to push new products out to the world every day, however, could make prioritizing ethical considerations a challenge. Many organizations focused on AI technologies, some of which are listed below, are working to make sure that doesn’t happen.

Rights, trust, and ethical considerations in commercial endeavors can get overshadowed by financial interests and the subjective interests and tastes of individuals. It doesn’t help that companies and policymakers may also feel that winning the race for AI dominance is a factor to be considered (which is not to say that such a consideration is antithetical to protecting individual rights). This underscores the need for thoughtful analysis, sooner rather than later, of the need for laws and regulations directed toward AI technologies.

To learn more about some of these issues, visit the websites of the following organizations, who are active in AI policy research: Access Now, AI Now, and Future of Life.

Congress Takes Aim at the FUTURE of Artificial Intelligence

As the calendar turns over to 2018, artificial intelligence system developers will need to keep an eye on first of its kind legislation being considered in Congress. The “Fundamentally Understanding The Usability and Realistic Evolution of Artificial Intelligence Act of 2017,” or FUTURE of AI Act, is Congress’s first major step toward comprehensive regulation of the AI tech sector.

Introduced on December 22, 2017, companion bills S.2217 and H.R.4625 touch on a host of AI issues, their stated purposes mirroring concerns raised by many about possible problems facing society as AI technologies become ubiquitous. The bills propose to establish a federal advisory committee charged with reporting to the Secretary of Commerce on many of today’s hot-button, industry-disrupting AI issues.

Definitions

Leaving the definition of “artificial intelligence” open for later modification, both bills take a broad-brush approach to defining, inclusively, what an AI system is, what artificial general intelligence (AGI) means, and what “narrow” AI systems are, which presumably would each be treated differently under future laws and implementing regulations.

Under both measures, AI is generally defined as “artificial systems that perform tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance,” and encompass systems that “solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.” According to the bills’ sponsors, the more “human-like the system within the context of its tasks, the more it can be said to use artificial intelligence.”

While those definitions and descriptions include plenty of ambiguity, characteristic of early legislative efforts, the bills also provide several clarifying examples: AI involves technologies that think like humans, such as cognitive architectures and neural networks; those that act like humans, such as systems that can pass the Turing test or other comparable test via natural language processing, knowledge representation, automated reasoning, and learning; those using sets of techniques, including machine learning, that seek to approximate some cognitive task; and AI technologies that act rationally, such as intelligent software agents and embodied robots that achieve goals via perception, planning, reasoning, learning, communicating, decision making, and acting.

The bills describe AGI as “a notional future AI system exhibiting apparently intelligent behavior at least as advanced as a person across the range of cognitive, emotional, and social behaviors,” which is generally consistent with how many others view the concept of an AGI system.

So-called narrow AI is viewed as an AI system that addresses specific application areas such as playing strategic games, language translation, self-driving vehicles, and image recognition. Plenty of other AI technologies today employ what the sponsors define as narrow AI.

The FUTURE of AI Committee

Both the House and Senate versions would establish a FUTURE of AI advisory committee made up of government and private-sector members tasked with evaluating and reporting on AI issues.

The bills emphasize that the committee should consider accountability and legal rights issues, including identifying where responsibility lies for violations of laws by an AI system, and assessing the compatibility of international regulations involving privacy rights of individuals who are or will be affected by technological innovation relating to AI. The committee will evaluate whether advancements in AI technologies have or will outpace the legal and regulatory regimes implemented to protect consumers, and how existing laws, including those concerning data access and privacy (as discussed here), should be modernized to enable the potential of AI.

The committee will study workforce impacts, including whether and how networked, automated, AI applications and robotic devices will displace or create jobs and how any job-related gains from AI can be maximized. The committee will also evaluate the role ethical issues should take in AI development, including whether and how to incorporate ethical standards in the development and implementation of AI, as suggested by groups such as IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems.

The committee will consider issues of machine learning bias through core cultural and societal norms, including how bias can be identified and eliminated in the development of AI and in the algorithms that support AI technologies. The committee will focus on evaluating the selection and processing of data used to train AI, diversity in the development of AI, the ways and places the systems are deployed and the potential harmful outcomes, and how ongoing dialogues and consultations with multi-stakeholder groups can maximize the potential of AI and further development of AI technologies that can benefit everyone inclusively.

The FUTURE of AI committee will also consider issues of competitiveness of the United States, such as how to create a climate for public and private sector investment and innovation in AI, and the possible benefits and effects that the development of AI may have on the economy, workforce, and competitiveness of the United States. The committee will be charged with reviewing AI-related education; open sharing of data and the open sharing of research on AI; international cooperation and competitiveness; opportunities for AI in rural communities (that is, how the Federal Government can encourage technological progress in implementation of AI that benefits the full spectrum of social and economic classes); and government efficiency (that is, how the Federal Government utilizes AI to handle large or complex data sets, how the development of AI can affect cost savings and streamline operations in various areas of government operations, including health care, cybersecurity, infrastructure, and disaster recovery).

Non-profits like AI Now and Future of Life, among others, are also considering many of the same issues. And while those groups primarily rely on private funding, the FUTURE of AI advisory committee will be funded through Congressional appropriations or through contributions “otherwise made available to the Secretary of Commerce,” which may include donations from private persons and non-federal entities that have a stake in AI technology development. The bills limit private donations to no more than 50% of the committee’s total funding from all sources.

The bills’ sponsors say that AI’s evolution can greatly benefit society by powering the information economy, fostering better-informed decisions, and helping unlock answers to questions that are presently unanswerable. Their sentiment that fostering the development of AI should be done in a way that maximizes AI’s benefit to society provides a worthy goal for the FUTURE of AI advisory committee’s work. But it also suggests how AI companies may wish to approach AI technology development efforts, especially in the interim period before future legislation becomes law.

Autonomous Vehicles Get a Pass on Federal Statutory Liability, At Least for Now

Consumers may accept “good enough” when it comes to the performance of certain artificial intelligence systems, such as AI-powered Internet search results. But in the case of autonomous vehicles, a recent article in The Economist argues that those same consumers will more likely favor AI-infused vehicles demonstrating the “best” safety record.

If that holds true, a recent Congressional bill directed at autonomous vehicles–the so-called “Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution Act,” or the SELF DRIVE Act (H.R. 3388)–should be well received by safety-conscious consumers. If signed into law, however, H.R. 3388 will require those same consumers to turn to the courts to determine liability and the magnitude of possible damages from vehicle crash events. That’s because the bill as currently written takes a pass on providing a statutory scheme for allocating crash-related liability.

H.R. 3388 passed the House by vote in early September 2017 (a similar bill is working its way through the Senate). Like several earlier proposals made public by the House Energy and Commerce Committee in connection with hearings in June 2017, the bill is one of the first federal attempts at closely regulating AI systems embodied in a major consumer product (at the state level, at least twenty states have enacted laws regarding some aspect of self-driving vehicles). The stated purpose of the SELF DRIVE Act is to memorialize the Federal role in ensuring the safety of highly automated vehicles as it relates to design, construction, and performance, by encouraging the testing and deployment of such vehicles.

Section 8 of the bill is notable in that it would direct future rulemaking requiring manufacturers to inform consumers of the capabilities and limitations of a vehicle’s “driving automation system.” The bill would define “automated driving system” as “the hardware and software that are collectively capable of performing the entire dynamic driving task on a sustained basis, regardless of whether such system is limited to a specific operational design domain.” The bill would define “dynamic driving task” as “the real time operational and tactical functions required to operate a vehicle in on-road traffic,” including monitoring the driving environment via object and event detection, recognition, classification, and response preparation and object and event response execution.

Requiring manufacturers to inform consumers of the “capabilities and limitations” of a vehicle’s “driving automation system,” combined with published safety statistics, might steer educated consumers toward a particular make and model, much like other vehicle features such as lane departure warning and automatic braking do. In the case of liability for crashes, however, H.R. 3388 would amend existing federal laws to clarify that “compliance with a motor vehicle safety standard…does not exempt a person from liability at common law” and that common law claims are not preempted.

In other words, vehicle manufacturers who meet all of H.R. 3388’s express standards (and future regulatory standards, which the bill mandates be written by the Department of Transportation and other federal agencies) could still be subject to common law causes of action, just as they are today.

Common law refers to the body of law developed over time by judges in the course of applying, to a set of facts and circumstances, relevant legal principles developed in previous court decisions (i.e., precedential decisions). Common law liability considers which party should be held responsible (and thus should pay damages) to another party who alleges some harm. Judicial common law decisions are thus generally viewed as being limited to a case’s specific facts and circumstances. Testifying before the House Committee on June 27, 2017, George Washington University Law School’s Alan Morrison described one of the criticisms lodged against relying solely on common law approaches to regulating autonomous vehicles and assessing liability: common law develops slowly over time.

“Traditionally, auto accidents and product liability rules have been matters of state law, generally developed by state courts, on a case by case basis,” Morrison said in prepared remarks for the record during testimony back in June. “Some scholars and others have suggested that [highly autonomous vehicles, HAVs] may be an area, like nuclear power was in the 1950s, in which liability laws, which form the basis for setting insurance premiums, require a uniform national liability answer, especially because HAVs, once they are deployed, will not stay within state boundaries. They argue that, in contrast to common law development, which can progress very slowly and depends on which cases reach the state’s highest court (and when), legislation can be acted on relatively quickly and comprehensively, without having to wait for the ‘right case’ to establish the [common] law.”

For those hoping Congress would use H.R. 3388 as an opportunity to issue targeted statutory schemes containing specific requirements covering the performance and standards for AI-infused autonomous vehicles, which might provide guidance for AI developers in many other industries, the bill may be viewed as disappointing. H.R. 3388 leaves unanswered questions about who should be liable in cases where complex hardware-software systems contribute to injury or simply fail to work as advertised. Autonomous vehicles rely on sensors for “monitoring the driving environment via object and event detection” and software trained to identify objects from that data (i.e., “object and event…recognition, classification, and response preparation”). Should liability fall on a sensor manufacturer whose sensor sampling rate is too slow and field of vision too narrow, on the software provider that trained its computer vision algorithm on data from 50,000 vehicle miles traveled instead of 100,000, or on the vehicle manufacturer that installed those hardware and software components? What if a manufacturer decides not to inform consumers of those limitations in its statement of “capabilities and limitations” of its “driving automation systems”? Should a federal law even attempt to set such detailed, one-size-fits-all standards? As things stand now, answers to these questions may become apparent only after courts consider them in the course of deciding liability in common law injury and product liability cases.

The Economist authors predict that companies whose AI is behind the fewest autonomous vehicle crashes “will enjoy outsize benefits.” Quantifying those benefits, however, may need to wait until after potential liability issues in AI-related cases become clearer over time.

How Privacy Law’s Beginnings May Suggest An Approach For Regulating Artificial Intelligence

A survey conducted in April 2017 by Morning Consult suggests most Americans are in favor of regulating artificial intelligence technologies. Of 2,200 American adults surveyed, 71% said they strongly or somewhat agreed that there should be national regulation of AI, while only 14% strongly or somewhat disagreed (15% did not express a view).

Technology and business leaders speaking out on whether to regulate AI fall into one of two camps: those who generally favor an ex post, case-by-case, common law approach, and those who prefer establishing a statutory and regulatory framework that, ex ante, sets forth clear do’s and don’ts and penalties for violations. (If you’re interested in learning about the challenges of ex post and ex ante approaches to regulation, check out Matt Scherer’s excellent article, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” published in the Harvard Journal of Law and Technology (2016)).

Advocates for a proactive regulatory approach caution that the alternative is fraught with predictable danger. Elon Musk, for one, notes that “[b]y the time we’re reactive in A.I., regulation’s too late.” Others, including leaders of some of the biggest AI technology companies in the industry, backed by lobbying organizations like the Information Technology Industry Council (ITI), feel that the hype surrounding AI does not justify quick Congressional action at this time.

Musk criticized this wait-and-see approach. “Normally, the way regulation’s set up,” he said, “a whole bunch of bad things happen, there’s a public outcry, and then after many years, a regulatory agency is set up to regulate that industry. There’s a bunch of opposition from companies who don’t like being told what to do by regulators, and it takes forever. That in the past has been bad but not something which represented a fundamental risk to the existence of civilization.”

Assuming AI regulation is inevitable, how should regulators (and legislators) approach such a formidable task? After all, AI technologies come in many forms, and their uses extend across multiple industries, including some already burdened with regulation. The history of privacy law may provide the answer.

Without question, privacy concerns, and privacy laws, touch on AI technology use and development. That’s because so much of today’s human-machine interactions involving AI are powered by user-provided or user-mined data. Search histories, images people appear in on social media, purchasing habits, home ownership details, political affiliations, and many other data points are well-known to marketers and others whose products and services rely on characterizing potential customers using, for example, machine learning algorithms, convolutional neural networks, and other AI tools. In the field of affective computing, human-robot and human-chatbot interactions are driven by a person’s voice, facial features, heart rate, and other physiological features, which are the percepts that the AI system collects, processes, stores, and uses when deciding actions to take, such as responding to user queries.

Privacy laws evolved from a period in late nineteenth-century America when journalists were unrestrained in publishing sensational pieces for newspapers and magazines–essentially the “fake news” of the time. This Yellow Journalism, as it was called, prompted legal scholars to express the view that people had a “right to be let alone,” setting in motion the development of a new body of law involving privacy. The key to regulating AI, as it was in the development of regulations governing privacy, may be the recognition of a specific personal right that is, or is expected to be, infringed by AI systems.

In the case of privacy, attorneys Samuel Warren and Louis Brandeis (later, Justice Brandeis) were the first to articulate a personal privacy right. In The Right to Privacy, published in the Harvard Law Review in 1890, Warren and Brandeis observed that “the press is overstepping in every direction the obvious bounds of propriety and of decency. Gossip…has become a trade.” They contended that “for years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons.” They argued that a right of privacy was entitled to recognition because “in every [] case the individual is entitled to decide whether that which is his shall be given to the public.” A violation of the person’s right of privacy, they wrote, should be actionable.

Soon after, courts began recognizing the right of privacy in civil cases. By 1960, in his seminal review article entitled Privacy (48 Cal. L. Rev. 383), William Prosser wrote that, “in one form or another,” the right of privacy “was declared to exist by the overwhelming majority of the American courts.” That recognition pushed the law toward more uniform standards: some states enacted limited or sweeping state-specific statutes, replacing the common law with statutory provisions and penalties, and federal appeals courts weighed in when conflicts between state laws arose. This slow progression, from initial recognition of a personal privacy right in 1890 to today’s modern statutes and expansive body of common law, won’t appeal to those pushing for regulation of AI now.

Even so, the process has to begin somewhere, and it could very well start with an assessment of the personal rights that should be recognized arising from interactions with or the use of AI technologies. Already, personal rights recognized by courts and embodied in statutes apply to AI technologies. But there is one personal right, potentially unique to AI technologies, that has been suggested: the right to know why (or how) an AI technology took a particular action (or made a decision) affecting a person.

Take, for example, an adverse credit decision by a bank that relies on machine learning algorithms to decide whether a customer should be given credit. Should that customer have the right to know why (or how) the system made the credit-worthiness decision? FastCompany writer Cliff Kuang explored this proposition in his recent article, “Can A.I. Be Taught to Explain Itself?” published in the New York Times (November 21, 2017).

If AI could explain itself, the banking customer might want to ask what kind of training data was used and whether that data was biased, whether there was an errant line of Python code to blame, or whether the AI gave appropriate weight to the customer’s credit history. Given the nature of AI technologies, some of these questions, and even more general ones, may only be answered by opening the AI black box. But even then, it may be impossible to pinpoint how the AI technology made its decision. In Europe, “tell me why/how” regulations are expected to become effective in May 2018. As I will discuss in a future post, many practical obstacles face those wishing to build a statutory or regulatory framework around the right of consumers to demand that a business’s AI explain why it made or took a particular adverse action.
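
For a sense of what a “why” answer can look like in the simplest case, the hypothetical Python sketch below (using scikit-learn and invented data) fits a linear credit model whose coefficients can be read directly as the weight given to each input. Deep neural models generally offer no such direct readout, which is the black-box problem described above.

```python
# Hypothetical sketch: a linear credit model whose coefficients serve as a
# crude "explanation" of what drives its decisions. Data and features invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["credit_history_len", "income", "utilization", "recent_defaults"]
X = rng.normal(size=(500, 4))
# Invented ground truth: history and income help, recent defaults hurt.
y = (X[:, 0] + 0.5 * X[:, 1] - 1.5 * X[:, 3]
     + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>20s}: {coef:+.2f}")  # sign and size ~ influence on approval
```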

Regulation of AI will likely happen. In fact, we are already seeing the beginning of direct legislative/regulatory efforts aimed at the autonomous driving industry. Whether interest in expanding those efforts to other AI technologies grows or lags may depend at least in part on whether people believe they have personal rights at stake in AI, and whether those rights are being protected by current laws and regulations.

Do Artificial Intelligence Technologies Need Regulating?

At some point, yes. But when? And how?

Today, AI is largely unregulated by federal and state governments. That may change as technologies incorporating AI continue to expand into communications, education, healthcare, law, law enforcement, manufacturing, transportation, and other industries, and prominent scientists as well as lawmakers continue raising concerns about unchecked AI.

The only Congressional proposals directly aimed at AI technologies so far have been limited to regulating Highly Autonomous Vehicles (HAVs, or self-driving cars). In developing those proposals, the House Energy and Commerce Committee brought stakeholders to the table in June 2017 to offer their input. In other areas of AI development, however, technologies are reportedly being developed without the input of those whose knowledge and experience might provide acceptable and appropriate direction.

Tim Hwang, an early adopter of AI technology in the legal industry, says individual artificial intelligence researchers are “basically writing policy in code” that reflects personal perspectives or biases. Kate Crawford, co-founder of the AI Now initiative, speaking with Wired magazine, assessed the problem this way: “Who gets a seat at the table in the design of these systems? At the moment, it’s driven by engineering and computer science experts who are designing systems that touch everything from criminal justice to healthcare to education. But in the same way that we wouldn’t expect a federal judge to optimize a neural network, we shouldn’t be expecting an engineer to understand the workings of the criminal justice system.”

Those concerns frame part of the debate over regulating the AI industry, but timing is another big question. Shivon Zilis, an investor at Bloomberg Beta, cautions that AI technology is here and will become very powerful, so the public discussion of regulation needs to happen now. Others, like Alphabet chairman Eric Schmidt, consider the government regulation debate premature.

A fundamental challenge for Congress and government regulators is how to regulate AI. As AI technologies advance from the simple to the super-intelligent, a one-size-fits-all regulatory approach could cause more problems than it addresses. At one end of the AI technology spectrum, simple AI systems may need little regulatory oversight. At the other end, super-intelligent autonomous systems may be viewed as having rights, and thus a focused set of regulations may be more appropriate. The Information Technology Industry Council (ITI), a lobbying group, “encourage[s] governments to evaluate existing policy tools and use caution before adopting new laws, regulations, or taxes that may inadvertently or unnecessarily impede the responsible development and use of AI.”

Regulating the AI industry will require careful thought and planning. Government regulations are hard to get right, and they rarely please everyone. Regulate too much and economic activity can be stifled. Regulate too little (or not at all) and the consequences could be worse. Congress and regulators will also need to assess the impacts of AI-specific regulations on an affected industry years and decades down the road, a difficult task when market trends and societal acceptance of AI will likely alter the trajectory of the AI industry in possibly unforeseen ways.

But we may be getting ahead of ourselves. Kate Darling recently noted that stakeholders have not yet agreed on basic definitions for AI. For example, there is not even a universally accepted definition today of what a “robot” is.

Sources:
June 2017 House Energy and Commerce Committee, Hearings on Self-Driving Cars

Wired Magazine: Why AI is Still Waiting for its Ethics Transplant

TechCrunch

Futurism

Gizmodo