Regulating Artificial Intelligence Technologies by Consensus

As artificial intelligence technologies continue to transform industries, several prominent voices in the technology community are calling for AI regulation to get ahead of what they see as AI's actual and potential social and economic impacts. These calls for action follow reports of bias in machine learning classifiers, misuse of open source AI tools, a lack of transparency in AI algorithms, privacy and data security lapses, and forecasts of workforce disruption as AI technologies spread.

Those advocating for strong state or federal legislative action around AI, however, may be disappointed by the slow pace at which US policymakers are tackling these issues. They may be even more disappointed by recent legislative efforts suggesting that AI technologies will not be regulated in the traditional sense, but instead may be governed through a process of consensus building without targeted, enforceable standards. This form of technological governance, often called "soft law," is not new. In some industries, soft law governance has evolved to the point of supplanting the more traditional command-and-control "hard law" approach.

Certain transformative technologies like AI evolve faster than policymakers can keep up. As a result, at least in the US, AI's future may not be tied to traditional legislative lawmaking, notice-and-comment rulemaking, and regulation by multiple government agencies whose missions include overseeing specific industry activities. According to those who have studied this trend, the hard law approach is gradually dying when it comes to certain technologies, with the exception of highly regulated segments such as autonomous vehicles (e.g., safety regulations) and fintech (e.g., regulatory oversight of distributed ledger technologies and cryptocurrencies). Instead, an industry-led, self-regulatory multistakeholder process is emerging in which participants, including government policymakers, develop consensus-based standards and processes that form a framework for regulating industry activities.

This process is already apparent when it comes to AI. Organizations like the IEEE have produced consensus-style standards for ethical considerations in the design and development of AI systems, and private companies are publishing their views on how they and others can self-regulate their activities, products, and services in the AI space. That is not to say that policymakers will play no role in AI governance. The US Congress and New York City, for example, are considering or establishing multistakeholder task forces to address the future of AI, workforce and education issues, and harms caused by machine learning algorithms.

A multistakeholder approach to regulating AI technologies is less likely to stifle innovation and competitiveness than a prescriptive hard law approach, which could involve numerous regulatory requirements, inflexible standards, and civil penalties for violations. Some, however, view hard law governance as providing a measure of predictability that consensus approaches cannot match. If multistakeholder governance is in AI's future, stakeholders will need to develop and adopt meaningful standards, and the industry will need to demonstrate a willingness to be held accountable in ways that go beyond appeasing vocal opponents and assuaging negative public sentiment toward AI. If they don't, legislators may feel pressure to take a more traditional hard law tack with AI technologies.