
New York City Task Force to Consider Algorithmic Harm

One might hear discussions about backpropagation, activation functions, and gradient descent when visiting an artificial intelligence company. But more recently, terms like bias and harm associated with AI models and products have entered tech’s vernacular. These issues also have the attention of many outside the tech world, following reports of AI systems performing better for some users than for others when making life-altering decisions about prison sentences, creditworthiness, and job hiring, among other matters.

Judging by the number of recently accepted conference papers about algorithmic bias, AI technologists, ethicists, and lawyers seem to be proactively addressing the issue by sharing various technical and other solutions with one another. At the same time, at least one legislative body, the New York City Council, has decided to explore ways to regulate AI technology, with an unstated goal of rooting out bias (or at least revealing its presence) by making AI systems more transparent.

New York City’s “Automated decision systems used by agencies” law (NYC Local Law No. 49 of 2018, effective January 11, 2018) creates a task force under the aegis of Mayor de Blasio’s office. The task force will convene no later than early May 2018 for the purpose of identifying automated decision systems used by New York City government agencies, developing procedures for identifying and remedying harm, developing a process for public review, and assessing the feasibility of archiving automated decision systems and relevant data.

The law defines an “automated decision system” as:

“computerized implementations of algorithms, including those derived from machine learning or other data processing or artificial intelligence techniques, which are used to make or assist in making decisions.”

The law defines an “agency automated decision system” as:

“an automated decision system used by an agency to make or assist in making decisions concerning rules, policies or actions implemented that impact the public.”

While the law does not specifically call out bias, algorithmic unfairness and harm can be traced in large part to biases in the data used to train algorithmic models. Data can be inherently biased when it reflects the implicit values of the limited number of people involved in its collection and labeling, or when the data chosen for a project does not represent a full cross-section of society (which is partly the result of copyright and other restrictions on access to proprietary data sets, and the ease of access to older or limited data sets in which groups of people may be unrepresented or underrepresented). A machine learning algorithm trained on this data will “learn” the biases and can perpetuate them when asked to make decisions.
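To make that mechanism concrete, the following is a minimal, hypothetical sketch (not drawn from the law or from any system discussed in this post) of how an ordinary classifier trained on skewed historical hiring labels can reproduce that skew when scoring new candidates. The feature names, numbers, and use of scikit-learn are purely illustrative assumptions.

# Illustrative sketch only: synthetic data showing how biased historical labels
# can be "learned" and reproduced by a model. All names and numbers are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "hiring" data: a qualification score and a group membership flag.
qualification = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)  # 0 or 1

# Biased historical labels: identically qualified candidates in group 1 were
# hired less often (e.g., because past reviewers undervalued them).
p_hire = 1 / (1 + np.exp(-(qualification - 1.0 * group)))
hired = rng.random(n) < p_hire

# Train an ordinary classifier on the biased historical outcomes.
X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# Score two equally qualified candidates who differ only by group membership.
candidates = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(candidates)[:, 1])  # the group-1 candidate scores lower

In this toy example, two candidates with identical qualification scores receive different predicted hiring probabilities solely because the model has learned the historical disparity in the training labels; surfacing exactly this kind of outcome is what transparency proposals like the task force’s review process are meant to enable.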

Some argue that making algorithmic black boxes more transparent is key to understanding whether an algorithm is perpetuating bias. The New York City task force could recommend that software companies that provide automated decision systems to New York City agencies make their systems transparent by disclosing details about their models (including source code) and producing the data used to create their models.

Several stakeholders have already expressed concerns about disclosing algorithms and data to regulators. What local agency, for example, would have the resources to evaluate complex AI software systems? And how will source code and data, which may embody trade secrets and include personal information, be safeguarded from inadvertent public disclosure? And what recourse will model developers have before agencies turn over algorithms (and the underlying source code and data) in response to Freedom of Information requests and court-issued subpoenas?

Others have expressed concerns that regulating at the local level may lead to disparate and varying standards and requirements, placing a heavy burden on companies. For example, New York City may impose standards different from those imposed by other local governments. Already, companies are having to deal with different state regulations governing AI-infused autonomous vehicles, and will soon have to contend with European Union rules concerning automated decision-making (GDPR Art. 22; effective May 2018) that may differ from those imposed locally.

Before its job is done, New York City’s task force will likely hear from many stakeholders, each with their own special interests. In the end, the task force’s recommendations, especially those on how to remedy harm, will receive careful scrutiny, and not just from local stakeholders, but also from policymakers far removed from New York City, because as AI technology’s impact on society grows, so too will the pressure to regulate AI systems on a national basis.

Information and/or references used for this post came from the following:

NYC Local Law No. 49 of 2018 (available here) and various hearing transcripts

Letter to Mayor Bill de Blasio, Jan. 22, 2018, from AI Now and others (available here)

EU General Data Protection Regulation (GDPR), Art. 22 (“Automated Individual Decision-Making, Including Profiling”), effective May 2018.

Dixon et al., “Measuring and Mitigating Unintended Bias in Text Classification”; AAAI 2018 (accepted paper).

W. Wallach and G. Marchant, “An Agile Ethical/Legal Model for the International and National Governance of AI and Robotics”; AAAI 2018 (accepted paper).

D. Tobey, “Software Malpractice in the Age of AI: A Guide for the Wary Tech Company”; AAAI 2018 (accepted paper).
