News and Analysis of Artificial Intelligence Technology Legal Issues
European Commission on Artificial Intelligence logo

Approaching Applicability of Europe’s Proposed AI Regulations as a Classification Task

The European Commission cast a wide regulatory net over artificial intelligence (AI) technologies and practices when it proposed new rules for AI on April 21, 2021 (link to PDF here). For those in the U.S. wondering whether the future rules might apply to them, the answer may be found by conducting a regulatory applicability assessment (RAA), which is essentially a feature-driven classification problem, albeit a complex and legal one. That is, for a given AI system or practice, one must determine whether it is a “prohibited,” “high risk,” “limited risk,” or “minimal risk” system/practice based on the definitions of…
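By way of illustration, the RAA can be sketched as a simple rules-based classifier. The sketch below is in Python; the feature flags and tier logic are hypothetical simplifications chosen for illustration only and do not track the proposal’s actual legal definitions or annexes.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical feature flags for a regulatory applicability assessment."""
    uses_subliminal_manipulation: bool = False  # prohibited-practice analog
    used_for_social_scoring: bool = False       # prohibited-practice analog
    safety_component: bool = False              # regulated-product analog
    high_risk_use_case: bool = False            # e.g., hiring or credit scoring
    interacts_with_humans: bool = False         # e.g., chatbots -> transparency duties

def classify_risk(system: AISystem) -> str:
    """Map a system's features onto the proposal's four risk tiers."""
    if system.uses_subliminal_manipulation or system.used_for_social_scoring:
        return "prohibited"
    if system.safety_component or system.high_risk_use_case:
        return "high risk"
    if system.interacts_with_humans:
        return "limited risk"
    return "minimal risk"

# Example: a resume-screening tool used in hiring would land in the
# high-risk tier under this simplified logic.
print(classify_risk(AISystem(high_risk_use_case=True)))  # -> high risk
```

In practice, of course, each “feature” is itself a legal determination requiring analysis of the proposal’s definitions, annexes, and exceptions, which is what makes the classification task complex.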

Ursula von der Leyen, EU Commission President

Proposed New EU AI Regulations: A Pre-Planning Guide for U.S. In-House Counsel

If the European Commission’s newly proposed harmonized rules on Artificial Intelligence (the “Artificial Intelligence Act”), published April 21, 2021, are adopted, U.S.-based AI companies operating in European Union (EU) countries (or expecting to do so) may soon be subject to significant new regulatory requirements. With few exceptions, the proposed regulations (PDF available here) would apply to companies or individuals (“providers”) who place on the market or put into service certain high-risk AI systems in the EU; to “users” (including companies) of those AI systems who are located in the EU; and to providers and users of such AI systems that are located…

JSL: The Journal of Science and Law

Artificial Intelligence and Trust: Improving Transparency and Explainability Policies to Reverse Data Hyper-Localization Trends

In this peer-reviewed article (Journal of Science and Law; open access), my co-author and I discuss how access to data is an essential part of artificial intelligence (AI) technology development efforts. But government and corporate actors have increasingly imposed localized and hyper-localized restrictions on data due to rising mistrust: the fear and uncertainty about what countries and companies are doing with data, including perceived and real efforts to exploit user data or to create more powerful and possibly dangerous AI systems that could threaten civil rights and national security. If the trend is not reversed, over-restriction could impede AI development to the…

Federal Trade Commission roundel logo

FTC Orders AI Company to Delete its Model Following Consumer Protection Law Violation

The nation’s consumer protection watchdog, the Federal Trade Commission (FTC), took extraordinary law enforcement measures on January 11, 2021, after finding that an artificial intelligence company had deceived customers about its data collection and use practices. In a first-of-its-kind settlement involving facial recognition surveillance systems, the FTC ordered Everalbum, Inc., the now-shuttered maker of the “Ever” photo album app and related website, to delete or destroy any machine learning and other models or algorithms developed in whole or in part using biometric information it unlawfully collected from users, along with the biometric data itself. In doing so, the agency…

Equality for minorities

Eliminating Structural Bias in Data-Based Technologies: A Path Forward

As national protests over systemic and structural racism in America continue, community organizers, Black scholars, and others fighting injustice and unequal treatment are once again raising awareness of a long-standing problem lurking within artificial intelligence data-based technologies: bias. The campaign to root out bias, or to eliminate biased systems altogether, has been amplified in recent weeks in the wake of reports of a Black Michigan man who was apparently arrested solely on the basis of a misidentification by a facial recognition system used by law enforcement. Criminal charges against the man were dropped by police only after they discovered their error, but by then the…

Order from chaos

Artificial Intelligence, Risk Disclosures, and a New SEC Materiality Test

When Microsoft shared in 2018 that certain “deficiencies” surrounding its artificial intelligence practices could “subject[] us to competitive harm, legal liability, and brand or reputational harm,” it set the stage for other companies to self-report perceived risks associated with developing, using, and producing AI technologies. Disclosing risks, a Securities and Exchange Commission (SEC) requirement imposed on public companies since 2005, raises a number of important considerations for both public and private companies, including transparency, accuracy, and the degree of speculation that may be acceptable when discussing AI impacts. Although companies in industry segments other than AI face similar concerns, AI…