Autonomous Vehicles Get a Pass on Federal Statutory Liability, At Least for Now

Consumers may accept “good enough” when it comes to the performance of certain artificial intelligence systems, such as AI-powered Internet search results. But in the case of autonomous vehicles, a recent article in The Economist argues that those same consumers are more likely to favor AI-infused vehicles demonstrating the “best” safety record.

If that holds true, a recent Congressional bill directed at autonomous vehicles, the so-called “Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution Act,” or SELF DRIVE Act (H.R. 3388), should be well received by safety-conscious consumers. If signed into law, however, H.R. 3388 will leave those same consumers to turn to the courts to determine liability and the magnitude of possible damages from vehicle crash events. That’s because the bill as currently written takes a pass on providing a statutory scheme for allocating crash-related liability.

H.R. 3388 passed the House by voice vote in early September 2017 (a similar bill is working its way through the Senate). Like several earlier proposals made public by the House Energy and Commerce Committee in connection with hearings in June 2017, the bill is one of the first federal attempts at closely regulating AI systems embodied in a major consumer product (at the state level, at least twenty states have enacted laws addressing some aspect of self-driving vehicles). The stated purpose of the SELF DRIVE Act is to memorialize the Federal role in ensuring the safety of highly automated vehicles as it relates to design, construction, and performance, by encouraging the testing and deployment of such vehicles.

Section 8 of the bill is notable in that it would direct future rulemaking requiring manufacturers to inform consumers of the capabilities and limitations of a vehicle’s “driving automation system.” The bill would define “automated driving system” as “the hardware and software that are collectively capable of performing the entire dynamic driving task on a sustained basis, regardless of whether such system is limited to a specific operational design domain.” It would define “dynamic driving task” as “the real time operational and tactical functions required to operate a vehicle in on-road traffic,” including monitoring the driving environment via object and event detection, recognition, classification, and response preparation, as well as object and event response execution.

Requiring manufacturers to inform consumers of the “capabilities and limitations” of a vehicle’s “driving automation system,” combined with published safety statistics, might steer educated consumers toward a particular make and model, much as features like lane departure warning and automatic braking do today. In the case of liability for crashes, however, H.R. 3388 would amend existing federal law to clarify that “compliance with a motor vehicle safety standard…does not exempt a person from liability at common law” and that common law claims are not preempted.

In other words, vehicle manufacturers who meet all of H.R. 3388’s express standards (and future regulatory standards, which the bill mandates be written by the Department of Transportation and other federal agencies) could still be subject to common law causes of action, just as they are today.

Common law refers to the body of law developed over time by judges as they apply, to a given set of facts and circumstances, legal principles established in previous court decisions (i.e., precedent). Common law liability asks which party should be held responsible, and thus pay damages, to another party alleging harm. Common law decisions are therefore generally viewed as limited to a case’s specific facts and circumstances. Testifying before the House Energy and Commerce Committee on June 27, 2017, George Washington University Law School’s Alan Morrison described one of the criticisms lodged against relying solely on the common law to regulate autonomous vehicles and assess liability: the common law develops slowly over time.

“Traditionally, auto accidents and product liability rules have been matters of state law, generally developed by state courts, on a case by case basis,” Morrison said in prepared remarks submitted for the record. “Some scholars and others have suggested that [highly autonomous vehicles, HAVs] may be an area, like nuclear power was in the 1950s, in which liability laws, which form the basis for setting insurance premiums, require a uniform national liability answer, especially because HAVs, once they are deployed, will not stay within state boundaries. They argue that, in contrast to common law development, which can progress very slowly and depends on which cases reach the state’s highest court (and when), legislation can be acted on relatively quickly and comprehensively, without having to wait for the ‘right case’ to establish the [common] law.”

For those hoping Congress would use H.R. 3388 as an opportunity to enact a targeted statutory scheme with specific performance and liability standards for AI-infused autonomous vehicles, one that might also guide AI developers in other industries, the bill may be viewed as disappointing. H.R. 3388 leaves unanswered questions about who should be liable when complex hardware-software systems contribute to injury or simply fail to work as advertised. Autonomous vehicles rely on sensors for “monitoring the driving environment via object and event detection” and on software trained to identify objects from that sensor data (i.e., “object and event…recognition, classification, and response preparation”). Should liability fall on the sensor manufacturer if, for example, its sensor’s sampling rate is too slow or its field of view too narrow? On the software provider that trained its computer vision algorithm on data from 50,000 vehicle miles traveled instead of 100,000? Or on the vehicle manufacturer that installed those hardware and software components? What if a manufacturer decides not to disclose those limitations in its statement of the “capabilities and limitations” of its “driving automation system”? Should a federal law even attempt to set such detailed, one-size-fits-all standards? As things stand, answers to these questions may emerge only as courts confront them in common law injury and product liability cases.
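To make the attribution problem concrete, here is a minimal sketch in Python of the perception chain the bill’s definitions describe. Every component name, parameter, and failure rule below is hypothetical, invented purely for illustration; real perception stacks are vastly more complex, and nothing here reflects any actual vendor’s hardware or software.

```python
# Hypothetical, simplified sketch: component names and thresholds are
# invented for illustration and do not model any real vehicle system.
from dataclasses import dataclass

@dataclass
class Sensor:
    """Hardware from one supplier."""
    sampling_rate_hz: int     # frames captured per second
    field_of_view_deg: float  # angular width of the sensor's cone

    def detect(self, object_angle_deg: float) -> bool:
        # An object outside the sensor's cone is never seen at all.
        return abs(object_angle_deg) <= self.field_of_view_deg / 2

@dataclass
class Classifier:
    """Perception software from a second supplier."""
    training_miles: int  # vehicle miles of training data behind the model

    def recognize(self, detected: bool) -> bool:
        # Toy assumption: a thinly trained model misses even detected objects.
        return detected and self.training_miles >= 100_000

def pipeline(sensor: Sensor, classifier: Classifier, angle_deg: float) -> bool:
    """Integration decisions belong to a third party: the vehicle maker."""
    return classifier.recognize(sensor.detect(angle_deg))

# A crash scenario: an obstacle 35 degrees off-center goes unrecognized.
sensor = Sensor(sampling_rate_hz=10, field_of_view_deg=60)    # narrow cone
classifier = Classifier(training_miles=50_000)                # sparse data
print(pipeline(sensor, classifier, angle_deg=35))             # False -- but whose failure?
```

In this toy scenario the missed recognition is overdetermined: a wider field of view, more training miles, or a different pairing of components would each have prevented it. That is exactly the multi-party causation puzzle H.R. 3388 leaves to the courts.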

The Economist authors predict that companies whose AI is behind the fewest autonomous vehicle crashes “will enjoy outsize benefits.” Quantifying those benefits, however, may have to wait until liability in AI-related cases becomes clearer.

Inaugural Post – AI Tech and the Law

Welcome. I am excited to present the first of what I hope will be many useful and timely posts covering issues arising at the crossroads of artificial intelligence technology and the law. My goal with this blog is to provide insightful discussion concerning the legal issues expected to affect individuals and businesses as they develop and interact with AI products and services. I also hope to engage with AI thought leaders in the legal industry as new AI technology-specific issues emerge. Join me by sharing your thoughts about AI and the law. If you’d like to see a particular issue discussed on these pages, I invite you to send me an email.

Much has already been written about the promises of AI and its ever-increasing role in daily life. AI technologies are unquestionably making their presence known in many impactful ways. Three billion smartphones are in use worldwide, and many of them run one form of AI or another. AI-driven voice assistants are appearing on kitchen countertops everywhere. Online search engines, powered by AI, deliver your search results. Selecting like/love/dislike/thumbs-down in your music streaming or news aggregation apps empowers AI algorithms to make recommendations for you.

Today’s tremendous AI industry expansion, driven by big data and enhanced computational power, shows every sign of continuing at an unprecedented rate. Investors are funding AI-focused startups across the globe. As Mark Cuban predicted earlier this year, the world’s first trillionaire will be an AI entrepreneur.

Not everyone, however, shares the same positive outlook concerning AI. Elon Musk, Bill Gates, Stephen Hawking, and others have raised concerns. Many foresee problems arising as AI becomes ubiquitous, especially if businesses are left to develop AI systems without guidance. The media have written about workers displaced by autonomous systems; bias, social justice, and civil rights concerns in big data; AI consumer product liability; privacy and data security; superintelligent systems; and other issues. Some have even predicted dire consequences from unchecked AI.

But with all the talk about AI, both positive and negative, businesses are operating in a vacuum of laws, regulations, and court opinions dealing directly with AI. Indeed, with only a few exceptions, most businesses today have little in the way of legal guidance about acceptable practices when it comes to developing and deploying their AI systems. While some advocate a common law approach to dealing with AI problems on a case-by-case basis, others would like to see a more structured regulatory framework.

I look forward to considering these and other issues in the months to come.

Brian Higgins