Senate-Passed Defense Authorization Bill Funds Artificial Intelligence Programs

The Senate-passed national defense authorization bill (H.R. 5515, as amended), to be known as the John S. McCain National Defense Authorization Act for Fiscal Year 2019, includes provisions authorizing funding for several artificial intelligence technology programs.

Passed by a vote of 85-10 on June 18, 2018, the bill would authorize appropriations for the Department of Defense “to coordinate the efforts of the Department to develop, mature, and transition artificial intelligence technologies into operational use.” A designated Coordinator would oversee joint activities of the services in developing a Strategic Plan for AI-related research and development, and would work to accelerate the development and fielding of AI technologies across the services. Notably, the Coordinator would also develop appropriate ethical, legal, and other policies governing the development and use of AI-enabled systems in operational situations. Within one year of enactment, the Coordinator would be required to complete a study on the future of AI in the context of DOD missions, including recommendations for integrating “the strengths and reliability of artificial intelligence and machine learning with the inductive reasoning power of a human.”

In other provisions, the Director of the Defense Intelligence Agency (DIA) would be required to submit a report to Congress within 90 days of enactment directly comparing the capabilities of the United States in emerging technologies (including AI) with the capabilities of U.S. adversaries in those same technologies.

The bill would also require the Under Secretary of Defense for Research and Engineering to pilot the use of machine-vision technologies to automate certain manual tasks in weapons-system manufacturing. Specifically, tests would be conducted to assess whether computer vision technology is effective, and at a sufficient level of readiness, to determine the authenticity of microelectronic parts from the time of their creation through final insertion into weapon systems.
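For readers curious what such an authenticity check might look like in practice, here is a minimal sketch in Python. It assumes a hypothetical, pre-computed similarity score between a scanned part and known-good reference imagery; the function names, threshold, and part identifier are illustrative inventions, not anything specified in the bill.

```python
from dataclasses import dataclass

@dataclass
class InspectionResult:
    part_id: str
    similarity: float  # hypothetical 0.0-1.0 match against reference imagery
    authentic: bool

def inspect_part(part_id: str, similarity: float,
                 threshold: float = 0.9) -> InspectionResult:
    """Flag a part as authentic or suspect based on how closely its captured
    image matches a known-good reference (the threshold is illustrative)."""
    return InspectionResult(part_id, similarity, similarity >= threshold)

# Illustrative run: the same part imaged at receiving and at final insertion.
for stage, score in [("receiving", 0.97), ("final insertion", 0.62)]:
    result = inspect_part("IC-4402", score)
    print(f"{stage}: {'authentic' if result.authentic else 'flag for review'}")
```

In a real pilot, the similarity score would come from a trained vision model, and the question the bill poses is precisely whether such models are reliable enough across a part’s life cycle, from creation through final insertion.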

The Senate version of the FY2019 authorization bill replaces an earlier House version (passed 351-66 on May 24, 2018).

Autonomous Vehicles Get a Pass on Federal Statutory Liability, At Least for Now

Consumers may accept “good enough” when it comes to the performance of certain artificial intelligence systems, such as AI-powered Internet search results. But in the case of autonomous vehicles, a recent article in The Economist argues, those same consumers are more likely to favor the AI-infused vehicles demonstrating the “best” safety record.

If that holds true, a recent congressional bill directed at autonomous vehicles, the so-called “Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution Act,” or SELF DRIVE Act (H.R. 3388), should be well received by safety-conscious consumers. If signed into law, however, H.R. 3388 would leave those same consumers to turn to the courts to determine liability, and the magnitude of possible damages, for vehicle crash events. That’s because the bill as currently written takes a pass on providing a statutory scheme for allocating crash-related liability.

H.R. 3388 passed the House by voice vote in early September 2017 (a similar bill is working its way through the Senate). Like several earlier proposals made public by the House Energy and Commerce Committee in connection with hearings in June 2017, the bill represents one of the first federal attempts at closely regulating AI systems embodied in a major consumer product (at the state level, at least twenty states have enacted laws addressing some aspect of self-driving vehicles). The stated purpose of the SELF DRIVE Act is to memorialize the Federal role in ensuring the safety of highly automated vehicles, as it relates to design, construction, and performance, by encouraging the testing and deployment of such vehicles.

Section 8 of the bill is notable in that it would direct future rulemaking requiring manufacturers to inform consumers of the capabilities and limitations of a vehicle’s “driving automation system.” The bill would define “automated driving system” as “the hardware and software that are collectively capable of performing the entire dynamic driving task on a sustained basis, regardless of whether such system is limited to a specific operational design domain,” and “dynamic driving task” as “the real time operational and tactical functions required to operate a vehicle in on-road traffic,” including monitoring the driving environment via object and event detection, recognition, classification, and response preparation, as well as object and event response execution.
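To make the statutory taxonomy easier to parse, the sketch below (our illustration; nothing here beyond the quoted stage names is drawn from the bill text) encodes the enumerated elements of the “dynamic driving task” and the bill’s requirement that an “automated driving system” be capable of performing all of them:

```python
from enum import Enum

class DynamicDrivingTask(Enum):
    # Stage names track the bill's enumeration; the Enum itself is illustrative.
    DETECTION = "object and event detection"
    RECOGNITION = "recognition"
    CLASSIFICATION = "classification"
    RESPONSE_PREPARATION = "response preparation"
    RESPONSE_EXECUTION = "object and event response execution"

def is_automated_driving_system(capabilities: set) -> bool:
    """Under the bill's definition, hardware and software must collectively
    perform the *entire* dynamic driving task, not merely a subset of it."""
    return capabilities == set(DynamicDrivingTask)

# A standalone detection feature would not qualify; a full-stack system would.
print(is_automated_driving_system({DynamicDrivingTask.DETECTION}))  # False
print(is_automated_driving_system(set(DynamicDrivingTask)))         # True
```

The force of the “entire dynamic driving task on a sustained basis” language is visible in the check: a system capable of only some stages, such as a standalone lane-keeping feature, would not meet the definition.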

Requiring manufacturers to inform consumers of the “capabilities and limitations” of a vehicle’s “driving automation system,” combined with published safety statistics, might steer educated consumers toward a particular make and model, much as existing safety features such as lane-departure warning and automatic braking do today. When it comes to liability for crashes, however, H.R. 3388 would amend existing federal law to clarify that “compliance with a motor vehicle safety standard…does not exempt a person from liability at common law” and that common law claims are not preempted.

In other words, a vehicle manufacturer that meets all of H.R. 3388’s express standards (and future regulatory standards, which the bill mandates be written by the Department of Transportation and other federal agencies) could still be subject to common law causes of action, just as it is today.

Common law refers to the body of law developed over time by judges applying, to a given set of facts and circumstances, legal principles established in previous court decisions (i.e., precedent). Common law liability asks which party should be held responsible, and thus should pay damages, to another party who alleges some harm. Common law decisions are therefore generally viewed as limited to a case’s specific facts and circumstances. Testifying before the House Energy and Commerce Committee on June 27, 2017, George Washington University Law School’s Alan Morrison described one of the criticisms lodged against relying solely on common law approaches to regulating autonomous vehicles and assessing liability: the common law develops slowly.

“Traditionally, auto accidents and product liability rules have been matters of state law, generally developed by state courts, on a case by case basis,” Morrison said in prepared remarks submitted for the record. “Some scholars and others have suggested that [highly autonomous vehicles, HAVs] may be an area, like nuclear power was in the 1950s, in which liability laws, which form the basis for setting insurance premiums, require a uniform national liability answer, especially because HAVs, once they are deployed, will not stay within state boundaries. They argue that, in contrast to common law development, which can progress very slowly and depends on which cases reach the state’s highest court (and when), legislation can be acted on relatively quickly and comprehensively, without having to wait for the ‘right case’ to establish the [common] law.”

For those hoping Congress would use H.R. 3388 to enact a targeted statutory scheme, one containing specific performance requirements and standards for AI-infused autonomous vehicles that might also guide AI developers in other industries, the bill may be viewed as disappointing. H.R. 3388 leaves unanswered questions about who should be liable when complex hardware-software systems contribute to injury or simply fail to work as advertised. Autonomous vehicles rely on sensors for “monitoring the driving environment via object and event detection” and on software trained to identify objects from that sensor data (i.e., “object and event…recognition, classification, and response preparation”). Should liability fall on the sensor manufacturer whose sensor samples too slowly or has too narrow a field of view, on the software provider that trained its computer vision algorithm on data from 50,000 vehicle miles traveled instead of 100,000, or on the vehicle manufacturer that installed those hardware and software components? What if a manufacturer decides not to disclose those limitations in its statement of the “capabilities and limitations” of its “driving automation system”? Should a federal law even attempt to set such detailed, one-size-fits-all standards? As things stand now, answers to these questions may emerge only as courts confront them in common law injury and product liability cases.

The Economist authors predict that companies whose AI is behind the fewest autonomous vehicle crashes “will enjoy outsize benefits.” Quantifying those benefits, however, may have to wait until potential liability issues in AI-related cases become clearer.