Legislators, Stockholders, Civil Rights Groups, and a CEO Seek Limits on AI Face Recognition Technology

Following the tragic killings of journalists and staff inside the Capital Gazette offices in Annapolis, Maryland, in late June, local police acknowledged that the alleged shooter’s identity was determined using facial recognition technology widely deployed by Maryland law enforcement personnel. According to DataWorks Plus, the company contracted to support the Maryland Image Repository System (MIRS) used by Anne Arundel County Police in its investigation, its technology derives face templates from facial landmark points extracted from images and digitally compares them against a large database of known faces. More recent technology, relying on artificial intelligence models, has led to even better and faster image and video analysis used by federal and state law enforcement for facial recognition purposes. AI-based models can process images and video captured by personal smartphones, laptops, home or business surveillance cameras, drones, and government surveillance cameras, including body-worn cameras used by law enforcement personnel, making it much easier to remotely identify and track objects and people in near-real time.
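
For readers curious about the mechanics, the short sketch below illustrates the same template-and-compare idea using the open-source face_recognition Python library; it is not DataWorks Plus’s system, and the image file names are placeholders.

```python
import face_recognition

# Build a small gallery of known faces (placeholder file names; each image
# is assumed to contain exactly one visible face).
known_files = ["person_a.jpg", "person_b.jpg", "person_c.jpg"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(f))[0]
    for f in known_files
]

# Encode the face in a probe image (e.g., a frame pulled from video).
probe_image = face_recognition.load_image_file("probe.jpg")
probe_encoding = face_recognition.face_encodings(probe_image)[0]

# Compare the probe against every known face; a smaller distance means a
# closer match to that gallery image.
distances = face_recognition.face_distance(known_encodings, probe_encoding)
best = distances.argmin()
print(f"Closest match: {known_files[best]} (distance {distances[best]:.2f})")
```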

Recently, facial recognition use cases have prompted privacy and civil liberties groups to speak out about potential abuses, with a growing vocal backlash aimed at body-worn cameras and facial recognition technology used in law enforcement surveillance. Much of the concern centers on the lack of transparency in the use of the technology, potential issues of bias, and the effectiveness of the technology itself. This has spurred legislators in several states to seek to impose oversight, transparency, accountability, and other limitations on the technology’s uses. Some within the tech industry have gone so far as to place self-imposed limits on uses of their own software for face data collection and surveillance activities.

Maryland and California are two states whose legislators have targeted law enforcement’s use of facial recognition in surveillance. In California, state legislators took a recent step toward regulating the technology when SB-1186 was passed by the state Senate on May 25, 2018. In remarks accompanying the bill, legislators concluded that “decisions about whether to use ‘surveillance technology’ for data collection and how to use and store the information collected should not be made by the agencies that would operate the technology, but by the elected bodies that are directly accountable to the residents in their communities who should also have opportunities to review the decision of whether or not to use surveillance technologies.”

If enacted, the California law would require law enforcement, beginning July 1, 2019, to submit a proposed Surveillance Use Policy to an elected governing body, and to make the policy available to the public, in order to obtain approval for the use of specific surveillance technologies and the information collected by those technologies. “Surveillance technology” is defined in the bill to include any electronic device or system with the capacity to monitor and collect audio, visual, locational, thermal, or similar information on any individual or group. This includes drones with cameras or monitoring capabilities, automated license plate recognition systems, closed-circuit cameras/televisions, International Mobile Subscriber Identity (IMSI) trackers, global positioning system (GPS) technology, software designed to monitor social media services or forecast criminal activity or criminality, radio frequency identification (RFID) technology, body-worn cameras, biometric identification hardware or software, and facial recognition hardware or software.

The bill would prohibit a law enforcement agency from selling, sharing, or transferring information gathered by surveillance technology, except to another law enforcement agency. The bill would provide that any person could bring an action for injunctive relief to prevent a violation of the law and, if successful, could recover reasonable attorney’s fees and costs.  The bill would also establish procedures to ensure that the collection, use, maintenance, sharing, and dissemination of information or data collected with surveillance technology is consistent with respect for individual privacy and civil liberties, and that any approved policy be publicly available on the approved agency’s Internet web site.

With the relatively slow pace of legislative action, at least compared to the speed at which face recognition technology is advancing, some within the tech community have taken matters into their own hands. Brian Brackeen, for example, the CEO of Miami-based facial recognition software company Kairos, recently decided that his company’s AI software will not be made available to any government, “be it America or another nation’s.” In a TechCrunch opinion published June 24, 2018, Brackeen said, “Whether or not you believe government surveillance is okay, using commercial facial recognition in law enforcement is irresponsible and dangerous” because it “opens the door for gross misconduct by the morally corrupt.” His position is rooted in his knowledge of how advanced AI models like his are created: “[Facial recognition] software is only as smart as the information it’s fed; if that’s predominantly images of, for example, African Americans that are ‘suspect,’ it could quickly learn to simply classify the black man as a categorized threat.”

Kairos is not alone in calling for limits. A coalition of organizations opposed to facial recognition surveillance published a letter on May 22, 2018, to Amazon’s CEO, Jeff Bezos, in which the signatories demanded that “Amazon stop powering a government surveillance infrastructure that poses a grave threat to customers and communities across the country. Amazon should not be in the business of providing surveillance systems like Rekognition to the government.” The organizations, a mix of civil liberties, academic, religious, and other groups, wrote that “Amazon Rekognition is primed for abuse in the hands of governments. This product poses a grave threat to communities, including people of color and immigrants….”

Amazon’s Rekognition system, first announced in late 2016, is a cloud-based platform for performing image and video analysis without the user needing a background in machine learning (a type of AI). Among its many uses today, Rekognition reportedly allows a user to conduct near real-time automated face recognition, analysis, and face comparisons (assessing the likelihood that faces in different images are the same person) using machine learning models.
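
To give a sense of how accessible this kind of analysis is to developers, the hedged sketch below calls Rekognition’s CompareFaces operation through AWS’s boto3 SDK. The image file names and similarity threshold are illustrative placeholders, and configured AWS credentials with a default region are assumed.

```python
import boto3

# Assumes AWS credentials and a default region are already configured.
client = boto3.client("rekognition")

with open("source.jpg", "rb") as f:   # face to search for (placeholder)
    source_bytes = f.read()
with open("target.jpg", "rb") as f:   # scene that may contain that face
    target_bytes = f.read()

response = client.compare_faces(
    SourceImage={"Bytes": source_bytes},
    TargetImage={"Bytes": target_bytes},
    SimilarityThreshold=80,           # only report matches at 80% or better
)

for match in response["FaceMatches"]:
    box = match["Face"]["BoundingBox"]
    print(f"Similarity {match['Similarity']:.1f}% at {box}")
```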

A few weeks after the coalition letter dropped, another group, this one a collection of individual and organizational Amazon shareholders, issued a similar letter to Bezos. In it, the shareholders alleged that “[w]hile Rekognition may be intended to enhance some law enforcement activities, we are deeply concerned it may ultimately violate civil and human rights.” Several Microsoft employees took a similar stand against their own company’s software being used by government agencies.

As long as questions surrounding transparency, accountability, and fairness in the use of face recognition technology in law enforcement continue to be raised, tech companies, legislators, and stakeholders will likely continue to react in ways that address immediate concerns. This may prove effective in the short term, but no one today can say what AI-based facial detection and recognition technologies will look like in the future or to what extent the technology will be used by law enforcement personnel.

Senate-Passed Defense Authorization Bill Funds Artificial Intelligence Programs

The Senate-passed national defense authorization bill (H.R. 5515, as amended), to be known as the John S. McCain National Defense Authorization Act for Fiscal Year 2019, includes spending provisions for several artificial intelligence technology programs.

Passed by a vote of 85-10 on June 18, 2018, the bill would include appropriations for the Department of Defense “to coordinate the efforts of the Department to develop, mature, and transition artificial intelligence technologies into operational use.” A designated Coordinator would oversee joint activities of the services in the development of a Strategic Plan for AI-related research and development. The Coordinator would also facilitate the acceleration of development and fielding of AI technologies across the services. Notably, the Coordinator would develop appropriate ethical, legal, and other policies governing the development and use of AI-enabled systems in operational situations. Within one year of enactment, the Coordinator would be required to complete a study on the future of AI in the context of DOD missions, including recommendations for integrating “the strengths and reliability of artificial intelligence and machine learning with the inductive reasoning power of a human.”

In other provisions, the Director of the Defense Intelligence Agency (DIA) would be tasked with submitting a report to Congress within 90 days of enactment that directly compares the capabilities of the US in emerging technologies (including AI) with the capabilities of US adversaries in those technologies.

The bill would require the Under Secretary for R&D to pilot the use of machine-vision technologies to automate certain manual tasks in weapons systems manufacturing. Specifically, tests would be conducted to assess whether computer vision technology is effective, and at a sufficient level of readiness, to determine the authenticity of microelectronic parts from the time of their creation through final insertion into weapon systems.

The Senate version of the 2019 authorization bill replaces an earlier House version (passed 351-66 on May 24, 2018).

At the Intersection of AI, Face Swapping, Deep Fakes, Right of Publicity, and Litigation

Websites like GitHub and Reddit offer developers and hobbyists dozens of repositories containing artificial intelligence deep learning models, instructions for their use, and forums for learning how to “face swap,” a technique used to automatically replace the face of a person in a video with that of a different person. Older versions of face swapping, primarily used on images, have been around for years in the form of entertaining apps that offered results of unremarkable quality (think cut and paste at its lowest, and Photoshop editing at a higher level). With the latest AI models, however, including deep neural networks, a video with a face-swapped actor, a so-called “deep fake” video, may appear so seamless as to fool even the closest inspection, and the quality is apparently getting better.
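
To make the contrast concrete, the sketch below shows only the older, cut-and-paste style of swap: OpenCV’s bundled face detector finds the largest face in two images and blends one onto the other. No deep learning is involved, and the file names are placeholders (each image is assumed to contain at least one detectable face).

```python
import cv2
import numpy as np

# Frontal-face detector that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def largest_face(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return max(faces, key=lambda r: r[2] * r[3])  # biggest bounding box

source = cv2.imread("source.jpg")   # face to paste in (placeholder path)
target = cv2.imread("target.jpg")   # image whose face gets replaced

sx, sy, sw, sh = largest_face(source)
dx, dy, dw, dh = largest_face(target)

# Resize the source face to fit the target's face box, then blend it in place.
patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (dw, dh))
mask = 255 * np.ones(patch.shape, patch.dtype)
center = (int(dx + dw // 2), int(dy + dh // 2))
swapped = cv2.seamlessClone(patch, target, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", swapped)
```

Even with seamless cloning, results from this approach tend to look obviously pasted; deep-fake pipelines instead train neural networks on many frames of both faces, which is what produces the far more convincing output described above.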

With only subtle clues to suggest that an actor in one of these videos is fake, the developers behind them have become the target of criticism, though much of the criticism has also been leveled at the AI tech industry generally for creating new AI tools with few restrictions on potential uses beyond their original intent. These concerns have now reached the halls of New York’s state legislature.

New York lawmakers are responding to the deep fake controversy, albeit in a narrow way, by proposing to make it illegal to use “digital replicas” of individuals without permission, a move that would indirectly regulate AI deep learning models. New York Assembly Bill No. A08155 (introduced in 2017, amended Jun. 5, 2018) is aimed at modernizing New York’s right of publicity law (N.Y. Civ. Rights Law §§ 50 and 51), one of the nation’s oldest publicity rights laws and one that does not provide post-mortem publicity rights, though it may do little to curb the broader proliferation of face-swapped and deep fake videos. In fact, only a relatively small slice of primarily famous New York actors, artists, athletes, and their heirs and estates would benefit from the proposed law’s digital replicas provision.

If enacted, New York’s right of publicity law would be amended to address computer-generated or electronic reproductions of a living or deceased individual’s likeness or voice that realistically depict the likeness or voice of the individual being portrayed (“realistic” is undefined). Use of a digital replica without the individual’s consent would violate the law if the use is in a scripted audiovisual or audio work (e.g., a movie or sound recording), or in a live performance of a dramatic work, and the use is intended to and creates the clear impression that the individual represented by the digital replica is performing the activity for which he or she is known, in the role of a fictional character.

It would also be a violation of the law to use a digital replica of a person in a performance of a musical work that is intended to and creates the clear impression that the individual represented by the digital replica is performing the activity for which he or she is known, in such musical work.

Moreover, it would be a violation to use a digital replica of a person in an audiovisual work that is intended to and creates the clear impression that an athlete represented by the digital replica is engaging in an athletic activity for which he or she is known.

Based on First Amendment principles, the bill would exclude from a person’s right to control their persona cases of parody, satire, commentary, and criticism; political, public interest, or newsworthy situations, including a documentary, regardless of the degree of fictionalization in the work; and de minimis or incidental uses.

In the case of deep fake digital replicas, the bill would make it a violation to use a digital replica, without the individual’s consent, in an audiovisual pornographic work in a manner that is intended to and creates the impression that the individual represented by the digital replica is performing.

Similar to the safe harbor provisions in other statutes, the New York law would provide limited immunity to any medium used for advertising including, but not limited to, newspapers, magazines, radio and television networks and stations, cable television systems, billboards, and transit advertising, that make unauthorized use of an individual’s persona for the purpose of advertising or trade, unless it is established that the owner or employee had knowledge of the unauthorized use, through presence or inclusion, of the individual’s persona in such advertisement or publication.

Moreover, the law would provide a private right of action allowing an injured party to sue for an injunction and to seek damages. Statutory damages in the amount of $750 would be available, or compensatory damages, which could be significantly higher. The finder of fact (judge or jury) could also award “exemplary damages,” which could be substantial, to send a message to others not to violate the law.

So far, AI tech developers have largely avoided direct legislative or regulatory action targeting their AI technologies, in part because some have taken steps to self-regulate, which may be necessary to avoid the confines of command-and-control-style state or federal regulatory schemes that would impose standards, restrictions, requirements, and a right to sue to collect damages and attorneys’ fees. Tech companies’ efforts at self-regulating, however, have been limited to expressing carefully crafted AI policies for themselves and their employees, as well as taking a public stance on issues of bias, ethics, and civil rights impacts from AI machine learning. Despite those efforts, more laws like New York’s may be introduced at the state level if AI technologies are used in ways that have questionable utility or social benefits.

For more about the intersection of right of publicity laws and regulating AI technology, please see an earlier post on this website.

Congress Looking at Data Science for Ways to Improve Patent Operations

When Congress passed the sweeping Leahy-Smith America Invents Act (AIA) on September 16, 2011, legislators weren’t concerned about how data analytics might improve efficiencies at one of the Commerce Department’s most data-heavy institutions: the US Patent Office. Patent reformers at the time were instead focused on curtailing patent troll litigation and conforming aspects of US patent law to those of other countries. Consequently, the Patent Office’s trove of pre-classified, pre-labeled, and semi-structured patent application and invention data–information ripe for big data analytics–remained mostly untapped at the time.

Fast forward to 2018 and Congress has finally put patent data in its cross-hairs. Now, Congress wants to see how “advanced data science analytics” techniques, such as artificial intelligence, machine learning, and other methods, could be used to analyze patent data and make policy recommendations. If enacted, the “Building Innovation Growth through Data for Intellectual Property Act” or the “BIG Data for IP Act” of 2018 (S. 2601; sponsored by Sen. Coons and Sen. Hatch) would require an investigation into how data science could help the Patent Office understand its current capabilities and whether its information technology systems need modernizing.

Those objectives, however, may be too narrow.  Silicon Valley tech companies, legal tech entrepreneurs, and even students have already seized upon the opportunities big patent data and machine learning techniques present, and, as a result, have developed interesting and useful capabilities.

Take, for example, the group of Stanford University students who in late 2011 developed a machine learning technique to automatically classify US patent applications based on an application’s written invention description. The students, part of Stanford’s CS229 Machine Learning class, proposed their solution around the same time Senator Leahy, Representative Smith, and the rest of Congress were debating the AIA in the fall of 2011. More recently, AI technologies used by companies like Cloem, AllPriorArt, AllPriorClaims, RoboReview, Specif.io, and others have shown how patent data and AI can augment traditional patent practitioners’ roles in the legal services industry.
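
A minimal sketch of that kind of classifier appears below; it is an illustration with toy data and a simple TF-IDF plus logistic regression pipeline, not the students’ actual code or the Patent Office’s classification scheme.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: invention descriptions paired with hypothetical classes.
descriptions = [
    "A neural network method for compressing video streams on mobile devices",
    "A database index structure for faster query execution",
    "A surgical stent with a drug-eluting polymer coating",
    "An implantable sensor for continuous glucose monitoring",
]
labels = ["Computing", "Computing", "Medical", "Medical"]

# TF-IDF text features feeding a simple baseline classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(descriptions, labels)

# Predict a class for a new, unseen invention description.
print(model.predict(["A machine learning model for image recognition"]))
```

Real patent classifiers train on far larger corpora and predict fine-grained classification codes, but the underlying workflow of turning written descriptions into features and learning from labeled examples is the same.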

Some of these AI tools may one day reduce much of the work patent practitioners have traditionally performed and could lead to fewer Examiners at the Patent Office whose jobs are to review patent applications for patentability. Indeed, some have imagined a world in which advanced machine learning models conceive inventions and prepare and file a patent application to protect those ideas without further human input.  In the future, advanced machine learning models, trained on the “prior art” patent data, could routinely examine patent applications for patentability, thus eliminating the need for costly and time-consuming inter partes reviews (a trial-like proceeding that has created much uncertainty since enactment of the AIA).

So perhaps Congress’ BIG Data for IP Act should focus less on how advanced data analytics can be used to “improve consistency, detect common sources of error, and improve productivity,” as the bill is currently written, and focus more globally on how patent data, powering new AI models, will disrupt Patent Office operations, the very nature of innovation, and how patent applications are prepared, filed, and examined.

Republicans Propose Commission to Study Artificial Intelligence Impacts on National Security

Three Republican members of Congress are co-sponsoring a new bill (H.R. 5356) “To establish the National Security Commission on Artificial Intelligence.” Introduced by Rep. Stefanik (R-NY) on March 20, 2018, the bill would create a temporary 11-member Commission tasked with producing an initial report followed by comprehensive annual reports, each providing issue-specific recommendations about national security needs and related risks from advances in artificial intelligence, machine learning, and associated technologies.

Issues the Commission would review include AI competitiveness in the context of national and economic security, means to maintain a competitive advantage in AI (including machine learning and quantum computing), other countries’ AI investment trends, workforce and education incentives to boost the number of AI workers, risks of advances in the military employment of AI by foreign countries, and ethics, privacy, and data security, among others.

Unlike other recent Congressional bills (see H.R. 4625, the FUTURE of AI Act, and H.R. 4829, the AI JOBS Act) that propose establishing committees under Executive Branch departments composed of both government employees and private citizens, H.R. 5356 would establish an independent Executive Branch commission made up exclusively of Federal employees appointed by the Department of Defense and various Armed Services Committee members, with no private citizen members (ostensibly because of national security and security clearance issues).

Congressional focus on AI technologies has generally been limited to highly autonomous vehicles and vehicle safety, with other areas, such as military impacts, receiving much less attention. By way of contrast, the UK’s Parliament seems far ahead. The UK Parliament Select Committee on AI has already met over a dozen times since mid-2017 and its members have convened numerous public meetings to hear from dozens of experts and stakeholders representing various disciplines and economic sectors.

In Your Face Artificial Intelligence: Regulating the Collection and Use of Face Data (Part I)

Of all the personal information individuals agree to provide companies when they interact with online or app services, perhaps none is more personal and intimate than a person’s facial features and their moment-by-moment emotional states. And while it may seem that face detection, face recognition, and affect analysis (emotional assessments based on facial features) are technologies only sophisticated and well-intentioned tech companies with armies of data scientists and engineers are competent to use, the reality is that advances in machine learning, microprocessor technology, and the availability of large datasets containing face data have lowered the barriers to entry for conducting robust face detection, face recognition, and affect analysis to levels never seen before.

In fact, anyone with a bit of programming knowledge can incorporate open-source algorithms and publicly available image data, train a model, create an app, and start collecting face data from app users. At the most basic entry point, all one really needs is a video camera with built-in face detection algorithms and access to tagged images of a person to start conducting facial recognition. Several commercial APIs also exist that make it relatively easy to tap into facial coding databases for use in assessing others’ emotional states from face data. If you’re not persuaded by the relative ease with which face data can be captured and used, just drop by any college (or high school) hackathon and see creative face data tech in action.
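
As a rough demonstration of that low entry point, the sketch below runs OpenCV’s off-the-shelf face detector on a webcam feed; it assumes a camera at index 0 and does nothing more than draw boxes around detected faces.

```python
import cv2

# Pretrained frontal-face detector bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

capture = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Draw a green rectangle around every face detected in the frame.
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

capture.release()
cv2.destroyAllWindows()
```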

In this post, the uses of face data are considered, along with a brief summary of the concerns raised about collecting and using face and emotional data. Part II will explore options for face data governance, which include the possibility of new or stronger laws and regulations and policies that a self-regulating industry and individual stakeholders could develop.

The many uses of our faces

Today’s mobile and fixed cameras and AI-based face detection and recognition software enable real-time controlled access to facilities and devices. The same technology allows users to identify fugitives and missing persons in surveillance videos, private citizens interacting with police, and unknown persons of interest in online images.

The technology provides a means for conducting and verifying commercial transactions using face biometric information, tracking people automatically while in public view, and extracting physical traits from images and videos to supplement individual demographic, psychographic, and behavioristic profiles.

Face software and facial coding techniques and models are also making it easier for market researchers, educators, robot developers, and autonomous vehicle safety designers to assess emotional states of people in human-machine interactions.

These and other use cases are possible in part because of advances in camera technology, the proliferation of cameras (think smart phones, CCTVs, traffic cameras, laptop cameras, etc.) and social media platforms, where millions of images and videos are created and uploaded by users every day. Increased computer processing power has led to advances in face recognition and affect-based machine learning research and improved the ability of complex models to execute faster. As a result, face data is relatively easy to collect, process, and use.

One can easily imagine the many ways face data might be abused, and some of the abuses have already been reported. Face data and machine learning models have been improperly used to create pornography, for example, and to track individuals in stores and other public locations without notice and without seeking permission. Models based on face data have reportedly been developed for no apparent purpose other than the predictive classification of beauty and sexual orientation.

Face recognition models are also subject to errors. Despite improvements, misidentification remains a weakness of face recognition and affect-based models, and it can translate into false positive identifications. Obviously, tragic consequences can occur if police or government agencies make decisions based on a false positive (or false negative) identification of a person.
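
The false positive problem is, at bottom, a thresholding problem. The hedged sketch below, again using the open-source face_recognition library with placeholder image files, shows how loosening or tightening the match tolerance trades false positives against false negatives.

```python
import face_recognition

# Placeholder file names; each image is assumed to contain one visible face.
probe = face_recognition.face_encodings(
    face_recognition.load_image_file("probe.jpg"))[0]
known = face_recognition.face_encodings(
    face_recognition.load_image_file("suspect.jpg"))[0]

# Distance between the two face encodings (smaller = more similar).
distance = face_recognition.face_distance([known], probe)[0]

# A looser tolerance catches more true matches but produces more false
# positives; a stricter tolerance does the reverse (more false negatives).
for tolerance in (0.4, 0.5, 0.6):
    print(f"tolerance={tolerance}: match={distance <= tolerance}")
```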

Face data models have been shown to perform more accurately on persons with lighter skin color. And affect models, while raising fewer concerns compared to face recognition due mainly to the slower rate of adoption of the technology, may misinterpret emotions if culture, geography, gender, and other factors are not accounted for in training data.

Of course, instances of reported abuse, bias, and data breaches overshadow the many unreported positive uses and machine learning applications of face data. But as is often the case, problems tend to catch the eyes of policymakers, regulators, and legislators, though overreaction to hyped problems can result in a patchwork of regulations and standards that go beyond addressing the underlying concerns and cause unintended effects, such as possibly stifling innovation and reducing competitiveness.

Moreover, reactionary regulation doesn’t play well with fast-moving disruptive tech, such as face recognition and affective computing, where the law seems to always be in catch-up mode. Compounding the governance problem is the notion that regulators and legislators are not crystal ball readers who can see into the future. Indeed, future uses of face data technologies may be hard to imagine today.

Even so, what matters to many is what governments and companies are doing with still images and videos, and specifically how face data extracted from media are being used, sometimes without consent. These concerns raise questions of transparency, privacy laws, terms of service and privacy policy agreements, data ownership, ethics, and data breaches, among others. They also implicate issues of whether and when federal and state governments should tighten existing regulations and impose new regulations where gaps exist in face data governance.

With recent data breaches making headlines and policymakers and stakeholders gathering in 2018 to examine AI’s impacts, there is no better time than now to revisit the need for stronger laws and to develop new technical- and ethical-based standards and guidelines applicable to face data. The next post will explore these issues.

A Proposed AI Task Force to Confront Talent Shortage and Workforce Changes

Just over a month after House and Senate commerce committees received companion bills recommending a federal task force to globally examine the “FUTURE” of Artificial Intelligence in the United States (H.R. 4625; introduced Dec. 12, 2017), a House education and workforce committee is set to consider a bill calling for a task force assessment of the impacts of AI technologies on the US workforce.

If enacted, the “Artificial Intelligence Job Opportunities and Background Summary Act of 2018,” or the “AI JOBS Act of 2018” (H.R. 4829; introduced Jan. 18, 2018), would require the Secretary of Labor to report on impacts and growth of AI, industries and workers who may be most impacted by AI, expertise and education needed in an AI economy (compared to today), an identification of workers who will experience expanded career opportunities from AI and those who may be vulnerable to career displacement, and ways to alleviate workforce displacement and prepare a future AI workforce.

Assessing these issues now is critical. Former Senator Tom Daschle and David Beier, in a recent opinion published in The Hill, see a “dramatic set of changes” in the nature of work in America as AI technologies become more entrenched in the US economy. Citing a McKinsey Global Institute study of 800 occupations, Daschle and Beier conclude that AI technologies will not cause net job losses. Rather, job losses will likely be offset by job changes and gains in fields such as healthcare, infrastructure development, energy, and fields that do not exist today. They cite Gartner Research estimates suggesting millions of new jobs will be created directly or indirectly as a result of the AI economy.

Already there are more AI-related jobs than high-skilled workers to fill them. One popular professional networking site currently lists over 6,000 “artificial intelligence” jobs. Chinese internet giant Tencent estimates there are only 300,000 AI experts worldwide (recent estimates by Toronto-based Element AI put that figure at merely 90,000 AI experts). In testimony this week before a House Information Technology subcommittee, Intel’s CTO Amir Khosrowshahi said, “Workers need to have the right skills to create AI technologies and right now we have too few workers to do the job.” Huge salaries for newly minted computer science PhDs will draw more people to the field, but job openings are likely to outpace available talent even as record numbers of students enroll in machine learning and related AI classes at top US universities.

If AI job gains shift workers disproportionately toward high-skilled jobs, the result may be continued job opportunity inequality. A 2016 study by Georgetown University’s Center on Education and the Workforce found that “out of the 11.6 million jobs created in the post-recession economy, 11.5 million went to workers with at least some college education.” The study authors found that, since 2008, workers with graduate degrees saw the largest job gains (83%), predominantly in high-skill occupations, and college graduates saw the next highest gains (57%), also in high-skill jobs. The highest job growth was seen in management, healthcare, and computer and mathematical sciences, the same fields primed for a future influx of highly skilled AI workers.

The US is not alone in raising concerns about job and workforce changes in an AI economy. The UK Parliament’s Artificial Intelligence Committee, for example, is confronting challenges in re-educating UK’s workforce to improve skills needed to work alongside AI systems. The US may need to do more to catch up, according to Mr. Khosrowshahi. “Current federal funding levels [in tech education],” he argued, “are not keeping pace with the rest of the industrialized world.”

The AI JOBS Act of 2018 presents an opportunity for US policymakers to develop novel approaches to address expected workforce shifts caused by an AI economy. If nothing is done, the US could find itself at a competitive disadvantage with increasing economic inequality.

Recognizing Individual Rights: A Step Toward Regulating Artificial Intelligence Technologies

In the movie Marjorie | Prime (August 2017), Jon Hamm plays an artificial intelligence version of Marjorie’s deceased husband, visible to Marjorie as a holographic projection in her beachfront home. As Marjorie (played by Lois Smith) interacts with Hamm’s Prime through a series of one-on-one conversations, the AI improves its cognition by observing and processing Marjorie’s emotional expressions, movements, and speech. The AI also learns from interactions with Marjorie’s son-in-law (Tim Robbins) and daughter (Geena Davis) as they recount highly personal and painful episodes of their lives. Through these interactions, Prime ends up possessing a collective knowledge greater, and more personal and intimate, than Marjorie’s original husband ever had.

Although not directly explored in the movie’s arc, the futuristic story touches on an important present-day debate about the fate of private personal data being uploaded to commercial and government AI systems, data that theoretically could persist in a memory device long after the end of the human lives from which the data originated, for as long as its owner chooses to keep it. It also raises questions about the fate of knowledge collected by other technologies perceiving other people’s lives, and to what extent these percepts, combined with people’s demographic, psychographic, and behavioristic characteristics, would be used to create sharply detailed personality profiles that companies and governments might abuse.

These are not entirely hypothetical issues to be addressed years down the road. Companies today provide the ability to create digital doppelgangers, or human digital twins, using AI technologies. And collecting personal information from people on a daily basis as they interact with digital assistants and other connected devices is not new. But as Marjorie|Prime and several non-cinematic AI technologies available today illustrate, AI systems allow the companies who build them unprecedented means for receiving, processing, storing, and taking actions based on some of the most personal information about people, including information about their present, past, and trending or future emotional states, which marketers for years have been suggesting are the keys to optimizing advertising content.

Congress recently acknowledged that “AI technologies are rapidly evolving in capability and application throughout society,” but the US currently has no federal policy towards AI and no part of the federal government has ownership of the advancement of AI technologies. Left unchecked in an unregulated market, as is largely the case today, AI technological advancements may trend in a direction that may be inconsistent with collective values and goals.

Identifying individual rights

One of the first questions those tasked with developing laws, regulations, and policies directed toward AI should ask is, what are the basic individual rights–rights that arise in the course of people interacting with AI technologies–that should be recognized? Answering that question will be key to ensuring that enacted laws and promulgated regulations achieve one of Congress’s recently stated goals: ensuring AI technologies benefit society. Answering that question now will be key to ensuring that policymakers have the necessary foundation in front of them and will not be unduly swayed by influential stakeholders as they take up the task of deciding how and/or when to regulate AI technologies.

Identifying individual rights leads to their recognition, which leads to basic legal protections, whether in the form of legislation or regulation or, initially, as common law from judges deciding if and how to remedy a harm to a person or property caused by an AI system. Fortunately, identifying individual rights is not a formidable task. The belief that people have a right to be let alone in their private lives, for example, established the basic premise for privacy laws in the US. Those same concerns about intrusion into personal lives ought to be among the first considerations for those tasked with formulating and developing AI legislation and regulations. The notion that people have a right to be let alone has led to the identification of other individual rights that could protect people in their interactions with AI systems. These include the right of transparency and explanation; the right of audit (with the objective of revealing bias, discrimination, and content filtering, and thus maintaining accountability); the right to know when you are dealing with an AI system and not a human; and the right to be forgotten (that is, mandatory deletion of one’s personal data), among others.

Addressing individual rights, however, may not persuade everyone to trust AI systems, especially when AI creators cannot explain precisely the basis for certain actions taken by trained AI technologies. People want to trust that owners and developers of AI systems that use private personal data will employ the best safeguards to protect that data. Trust, but verify, may need to play a role in policy-making efforts even if policies appear to comprehensively address individual rights. Trust might be addressed by imposing specific reporting and disclosure requirements, such as those suggested by federal lawmakers in pending federal autonomous driving legislation.

In the end, however, laws and regulations developed with privacy and other individual rights in mind, that address data security and other concerns people have about trusting their data to AI companies, will invariably include gaps, omissions, and incomplete definitions. The result may be unregulated commercial AI systems, and AI businesses finding workarounds. In such instances, people may have limited options other than to fully opt out, or accept that individual AI technology developers’ work was motivated by ethical considerations and a desire to make something that benefits society. The pressure within many tech companies and startups to push new products out to the world every day, however, could make prioritizing ethical considerations a challenge. Many organizations focused on AI technologies, some of which are listed below, are working to make sure that doesn’t happen.

Rights, trust, and ethical considerations in commercial endeavors can get overshadowed by financial interests and the subjective interests and tastes of individuals. It doesn’t help that companies and policymakers may also feel that winning the race for AI dominance is a factor to be considered (which is not to say that such a consideration is antithetical to protecting individual rights). This underscores the need for thoughtful analysis, sooner rather than later, of the need for laws and regulations directed toward AI technologies.

To learn more about some of these issues, visit the websites of the following organizations, who are active in AI policy research: Access Now, AI Now, and Future of Life.