Congress Takes Aim at the FUTURE of Artificial Intelligence

As the calendar turns over to 2018, artificial intelligence system developers will need to keep an eye on first-of-its-kind legislation being considered in Congress. The “Fundamentally Understanding The Usability and Realistic Evolution of Artificial Intelligence Act of 2017,” or FUTURE of AI Act, is Congress’s first major step toward comprehensive regulation of the AI tech sector.

Introduced on December 22, 2017, companion bills S.2217 and H.R.4625 touch on a host of AI issues, their stated purposes mirroring concerns raised by many about the problems society may face as AI technologies become ubiquitous. The bills propose to establish a federal advisory committee charged with reporting to the Secretary of Commerce on many of today’s hot-button, industry-disrupting AI issues.

Definitions

Leaving the definition of “artificial intelligence” open for later modification, both bills take a broad-brush approach to defining, inclusively, what an AI system is, what artificial general intelligence (AGI) means, and what constitutes a “narrow” AI system, categories that presumably would each be treated differently under future laws and implementing regulations.

Under both measures, AI is generally defined as “artificial systems that perform tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance,” and encompasses systems that “solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.” According to the bills’ sponsors, “the more human-like the system within the context of its tasks, the more it can be said to use artificial intelligence.”

While those definitions and descriptions include plenty of ambiguity, characteristic of early legislative efforts, the bills also provide several clarifying examples of what AI involves:

- technologies that think like humans, such as cognitive architectures and neural networks;
- technologies that act like humans, such as systems that can pass the Turing test or another comparable test via natural language processing, knowledge representation, automated reasoning, and learning;
- sets of techniques, including machine learning, that seek to approximate some cognitive task; and
- technologies that act rationally, such as intelligent software agents and embodied robots that achieve goals via perception, planning, reasoning, learning, communicating, decision making, and acting.
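
To make the third category concrete, the following is a minimal, purely illustrative sketch of a machine-learning system that approximates a single cognitive task, recognizing handwritten digits, and nothing else. Nothing in the bills references this code; the dataset and model choices are simply convenient stand-ins for the statutory language.

```python
# Illustrative only: a "narrow AI" system in the bills' sense -- a
# machine-learning model that approximates one cognitive task
# (handwritten-digit recognition) and nothing beyond it.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 grayscale images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network "learns from its experience" (the training data)
# and "improves its performance," echoing the bills' general definition.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2%}")
```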

The bills describe AGI as “a notional future AI system exhibiting apparently intelligent behavior at least as advanced as a person across the range of cognitive, emotional, and social behaviors,” which is generally consistent with how many others view the concept of an AGI system.

So-called narrow AI is viewed as an AI system that addresses specific application areas, such as playing strategic games, language translation, self-driving vehicles, and image recognition. Many of today’s AI technologies fall within what the sponsors define as narrow AI.

The FUTURE of AI Committee

Both the House and Senate versions would establish a FUTURE of AI advisory committee made up of government and private-sector members tasked with evaluating and reporting on AI issues.

The bills emphasize that the committee should consider accountability and legal rights issues, including identifying where responsibility lies when an AI system violates the law, and assessing the compatibility of international regulations involving the privacy rights of individuals who are or will be affected by technological innovation relating to AI. The committee will evaluate whether advancements in AI technologies have outpaced, or will outpace, the legal and regulatory regimes implemented to protect consumers, and how existing laws, including those concerning data access and privacy, should be modernized to enable the potential of AI.

The committee will study workforce impacts, including whether and how networked, automated AI applications and robotic devices will displace or create jobs, and how any job-related gains from AI can be maximized. The committee will also evaluate the role ethical issues should play in AI development, including whether and how to incorporate ethical standards into the development and implementation of AI, as suggested by groups such as the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.

The committee will consider how machine learning bias interacts with core cultural and societal norms, including how bias can be identified and eliminated in the development of AI and in the algorithms that support AI technologies. Its focus will include the selection and processing of the data used to train AI, diversity in the development of AI, where and how AI systems are deployed and the potential for harmful outcomes, and how ongoing dialogue and consultation with multi-stakeholder groups can maximize the potential of AI and further the development of AI technologies that benefit everyone inclusively.
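
As a purely illustrative aside (nothing the bills prescribe), one common first step in identifying the kind of bias the committee would study is to compare a model’s favorable-outcome rates across groups, a demographic-parity check. The groups, decisions, and rates below are entirely hypothetical.

```python
# Illustrative only: a demographic-parity check on hypothetical model
# decisions. A large gap between groups flags the system for closer
# review; it does not by itself establish unlawful bias.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)  # hypothetical protected attribute
# Hypothetical model decisions with a built-in disparity between groups.
approved = rng.random(1000) < np.where(group == "A", 0.60, 0.45)

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
print(f"Approval rate, group A: {rate_a:.2%}")
print(f"Approval rate, group B: {rate_b:.2%}")
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2%}")
```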

The FUTURE of AI committee will also consider issues of United States competitiveness, such as how to create a climate for public- and private-sector investment and innovation in AI, and the possible benefits and effects that the development of AI may have on the economy, workforce, and competitiveness of the United States. The committee will be charged with reviewing:

- AI-related education;
- open sharing of data and of research on AI;
- international cooperation and competitiveness;
- opportunities for AI in rural communities, that is, how the Federal Government can encourage technological progress in the implementation of AI that benefits the full spectrum of social and economic classes; and
- government efficiency, that is, how the Federal Government uses AI to handle large or complex data sets, and how the development of AI can produce cost savings and streamline operations in areas of government operations including health care, cybersecurity, infrastructure, and disaster recovery.

Non-profits like the AI Now Institute and the Future of Life Institute, among others, are also considering many of the same issues. And while those groups rely primarily on private funding, the FUTURE of AI advisory committee would be funded through Congressional appropriations or through contributions “otherwise made available to the Secretary of Commerce,” which may include donations from private persons and non-federal entities that have a stake in AI technology development. The bills cap private donations at 50% of the committee’s total funding from all sources.

The bills’ sponsors say that AI’s evolution can greatly benefit society by powering the information economy, fostering better-informed decisions, and helping unlock answers to questions that are presently unanswerable. Their view that the development of AI should be fostered in a way that maximizes its benefit to society provides a worthy goal for the FUTURE of AI advisory committee’s work. It also suggests how AI companies may wish to approach their technology development efforts, especially in the interim before any future legislation becomes law.

Do Artificial Intelligence Technologies Need Regulating?

At some point, yes. But when? And how?

Today, AI is largely unregulated by federal and state governments. That may change as technologies incorporating AI continue to expand into communications, education, healthcare, law, law enforcement, manufacturing, transportation, and other industries, and prominent scientists as well as lawmakers continue raising concerns about unchecked AI.

The only Congressional proposals directly aimed at AI technologies so far have been limited to regulating Highly Autonomous Vehicles (HAVs, or self-driving cars). In developing those proposals, the House Energy and Commerce Committee brought stakeholders to the table in June 2017 to offer their input. In other areas of AI development, however, technologies are reportedly being developed without the input of those whose knowledge and experience might provide acceptable and appropriate direction.

Tim Hwang, an early adopter of AI technology in the legal industry, says individual artificial intelligence researchers are “basically writing policy in code” that reflects personal perspectives or biases. Kate Crawford, co-founder of the AI Now Institute, speaking with Wired magazine, assessed the problem this way: “Who gets a seat at the table in the design of these systems? At the moment, it’s driven by engineering and computer science experts who are designing systems that touch everything from criminal justice to healthcare to education. But in the same way that we wouldn’t expect a federal judge to optimize a neural network, we shouldn’t be expecting an engineer to understand the workings of the criminal justice system.”

Those concerns frame part of the debate over regulating the AI industry, but timing is another big question. Shivon Zilis, an investor at Bloomberg Beta, cautions that AI is here and will become a very powerful technology, so the public discussion of regulation needs to happen now. Others, like Alphabet chairman Eric Schmidt, consider the government regulation debate premature.

A fundamental challenge for Congress and government regulators is how to regulate AI. As AI technologies advance from the simple to the super-intelligent, a one-size-fits-all regulatory approach could cause more problems than it addresses. At one end of the spectrum, simple AI systems may need little regulatory oversight. At the other end, super-intelligent autonomous systems may come to be viewed as having rights, and a more focused set of regulations may be appropriate. The Information Technology Industry Council (ITI), a lobbying group, “encourage[s] governments to evaluate existing policy tools and use caution before adopting new laws, regulations, or taxes that may inadvertently or unnecessarily impede the responsible development and use of AI.”

Regulating the AI industry will require careful thought and planning. Government regulations are hard to get right, and they rarely please everyone. Regulate too much and economic activity can be stifled. Regulate too little (or not at all) and the consequences could be worse. Congress and regulators will also need to assess the impacts of AI-specific regulations on an affected industry years and decades down the road, a difficult task when market trends and societal acceptance of AI will likely alter the trajectory of the AI industry in possibly unforeseen ways.

But we may be getting ahead of ourselves. Kate Darling recently noted that stakeholders have not yet agreed on basic definitions for AI. For example, there is not even a universally accepted definition today of what a “robot” is.

Sources:
- House Energy and Commerce Committee, Hearings on Self-Driving Cars (June 2017)
- Wired, “Why AI Is Still Waiting for Its Ethics Transplant”
- TechCrunch
- Futurism
- Gizmodo