Autonomous Vehicles Get a Pass on Federal Statutory Liability, At Least for Now

Consumers may accept “good enough” when it comes to the performance of certain artificial intelligence systems, such as AI-powered Internet search results. But in the case of autonomous vehicles, a recent article in The Economist argues that those same consumers will more likely favor AI-infused vehicles demonstrating the “best” safety record.

If that holds true, a recent Congressional bill directed at autonomous vehicles, the so-called "Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution Act," or the SELF DRIVE Act (H.R. 3388), should be well received by safety-conscious consumers. If signed into law, however, H.R. 3388 will require those same consumers to turn to the courts to determine liability and the magnitude of possible damages from vehicle crash events. That's because the bill as currently written takes a pass on providing a statutory scheme for allocating crash-related liability.

H.R. 3388 passed the House by voice vote in early September 2017 (a similar bill is working its way through the Senate). Like several earlier proposals made public by the House Energy and Commerce Committee in connection with hearings in June 2017, the bill is one of the first federal attempts at closely regulating AI systems embodied in a major consumer product (at the state level, at least twenty states have enacted laws addressing some aspect of self-driving vehicles). The stated purpose of the SELF DRIVE Act is to memorialize the Federal role in ensuring the safety of highly automated vehicles as it relates to design, construction, and performance, by encouraging the testing and deployment of such vehicles.

Section 8 of the bill is notable in that it would direct future rulemaking requiring manufacturers to inform consumers of the capabilities and limitations of a vehicle’s “driving automation system.” The bill would define “automated driving system” as “the hardware and software that are collectively capable of performing the entire dynamic driving task on a sustained basis, regardless of whether such system is limited to a specific operational design domain.” It would define “dynamic driving task” as “the real time operational and tactical functions required to operate a vehicle in on-road traffic,” including monitoring the driving environment via object and event detection, recognition, classification, and response preparation, as well as object and event response execution.

Requiring manufacturers to inform consumers of the “capabilities and limitations” of a vehicle’s “driving automation system,” combined with published safety statistics, might steer educated consumers toward a particular make and model, much as features such as lane-departure warning and automatic emergency braking do today. In the case of liability for crashes, however, H.R. 3388 would amend existing federal laws to clarify that “compliance with a motor vehicle safety standard…does not exempt a person from liability at common law,” and that common law claims are not preempted.

In other words, a vehicle manufacturer that meets all of H.R. 3388’s express standards (and the future regulatory standards the bill directs the Department of Transportation and other federal agencies to write) could still be subject to common law causes of action, just as manufacturers are today.

Common law refers to the body of law developed over time by judges in the course of applying, to a given set of facts and circumstances, relevant legal principles established in previous court decisions (i.e., precedential decisions). Common law liability considers which party should be held responsible, and thus should pay damages, to another party alleging some harm. Judicial common law decisions are therefore generally viewed as limited to a case’s specific facts and circumstances. Testifying before the House Committee on June 27, 2017, George Washington University Law School’s Alan Morrison described one of the criticisms lodged against relying solely on common law approaches to regulating autonomous vehicles and assessing liability: common law develops slowly over time.

“Traditionally, auto accidents and product liability rules have been matters of state law, generally developed by state courts, on a case by case basis,” Morrison said in prepared remarks for the record during testimony back in June. “Some scholars and others have suggested that [highly autonomous vehicles, HAVs] may be an area, like nuclear power was in the 1950s, in which liability laws, which form the basis for setting insurance premiums, require a uniform national liability answer, especially because HAVs, once they are deployed, will not stay within state boundaries. They argue that, in contrast to common law development, which can progress very slowly and depends on which cases reach the state’s highest court (and when), legislation can be acted on relatively quickly and comprehensively, without having to wait for the ‘right case’ to establish the [common] law.”

For those hoping Congress would use H.R. 3388 as an opportunity to issue targeted statutory schemes containing specific performance requirements and standards for AI-infused autonomous vehicles, which might also provide guidance for AI developers in other industries, the bill may be viewed as disappointing. H.R. 3388 leaves unanswered questions about who should be liable when complex hardware-software systems contribute to injury or simply fail to work as advertised. Autonomous vehicles rely on sensors for “monitoring the driving environment via object and event detection” and on software trained to identify objects from that sensor data (i.e., “object and event…recognition, classification, and response preparation”). Should a sensor manufacturer be held liable if, for example, its sensor’s sampling rate is too slow and its field of view too narrow? What about the software provider that trained its computer vision algorithm on data from 50,000 vehicle miles traveled instead of 100,000, or the vehicle manufacturer that installed those hardware and software components? What if a manufacturer decides not to disclose those limitations in its statement of the “capabilities and limitations” of its “driving automation systems”? Should a federal law even attempt to set such detailed, one-size-fits-all standards? As things stand now, answers to these questions may become apparent only after courts consider them in the course of deciding liability in common law injury and product liability cases.

The Economist authors predict that companies whose AI is behind the fewest autonomous vehicle crashes “will enjoy outsize benefits.” Quantifying those benefits, however, may need to wait until after potential liability issues in AI-related cases become clearer over time.

Do Artificial Intelligence Technologies Need Regulating?

At some point, yes. But when? And how?

Today, AI is largely unregulated by federal and state governments. That may change as technologies incorporating AI continue to expand into communications, education, healthcare, law, law enforcement, manufacturing, transportation, and other industries, and prominent scientists as well as lawmakers continue raising concerns about unchecked AI.

The only Congressional proposals directly aimed at AI technologies so far have been limited to regulating Highly Autonomous Vehicles (HAVs, or self-driving cars). In developing those proposals, the House Energy and Commerce Committee brought stakeholders to the table in June 2017 to offer their input. In other areas of AI development, however, technologies are reportedly being developed without the input of those whose knowledge and experience might provide acceptable and appropriate direction.

Tim Hwang, an early adopter of AI technology in the legal industry, says individual artificial intelligence researchers are “basically writing policy in code” that reflects personal perspectives or biases. Kate Crawford, a co-founder of AI Now, speaking with Wired magazine, assessed the problem this way: “Who gets a seat at the table in the design of these systems? At the moment, it’s driven by engineering and computer science experts who are designing systems that touch everything from criminal justice to healthcare to education. But in the same way that we wouldn’t expect a federal judge to optimize a neural network, we shouldn’t be expecting an engineer to understand the workings of the criminal justice system.”

Those concerns frame part of the debate over regulating the AI industry, but timing is another big question. Shivon Zilis, an investor at Bloomberg Beta, cautions that AI is already here and will become a very powerful technology, so the public discussion of regulation needs to happen now. Others, like Alphabet chairman Eric Schmidt, consider the government regulation debate premature.

A fundamental challenge for Congress and government regulators is how to regulate AI. As AI technologies advance from the simple to the super-intelligent, a one-size-fits-all regulatory approach could cause more problems than it addresses. At one end of the AI technology spectrum, simple AI systems may need little regulatory oversight. At the other end, super-intelligent autonomous systems may come to be viewed as having rights, and a focused set of regulations may be more appropriate for them. The Information Technology Industry Council (ITI), a lobbying group, “encourage[s] governments to evaluate existing policy tools and use caution before adopting new laws, regulations, or taxes that may inadvertently or unnecessarily impede the responsible development and use of AI.”

Regulating the AI industry will require careful thought and planning. Government regulations are hard to get right, and they rarely please everyone. Regulate too much and economic activity can be stifled. Regulate too little (or not at all) and the consequences could be worse. Congress and regulators will also need to assess the impacts of AI-specific regulations on an affected industry years and decades down the road, a difficult task when market trends and societal acceptance of AI will likely alter the trajectory of the AI industry in possibly unforeseen ways.

But we may be getting ahead of ourselves. Kate Darling recently noted that stakeholders have not yet agreed on basic definitions for AI. There is not even a universally accepted definition today of what counts as a “robot.”

Sources:
House Energy and Commerce Committee, Hearings on Self-Driving Cars (June 2017)

Wired: Why AI Is Still Waiting for Its Ethics Transplant

TechCrunch

Futurism

Gizmodo