News and Analysis of Artificial Intelligence Technology Legal Issues

Approaching Applicability of Europe’s Proposed AI Regulations as a Classification Task

The European Commission cast a wide regulatory net over artificial intelligence (AI) technologies and practices when it proposed new rules for AI on April 21, 2021 (link to PDF here). For those in the U.S. wondering whether the future rules might apply to them, the answer may be found by following a regulatory applicability assessment (RAA), which is essentially a feature-driven classification problem, albeit a complex and legal one. That is, for a given AI system or practice, one must determine whether it is a “prohibited,” “high risk,” “limited risk,” or “minimal risk” system/practice based on the definitions of…
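
To make the classification framing concrete, the sketch below shows one way a rule-based tier assignment might look in code. The feature flags and decision rules are hypothetical simplifications invented for illustration; they are not the proposal's actual legal definitions, and a real RAA cannot be reduced to a script.

```python
from dataclasses import dataclass

# Hypothetical, simplified feature flags for an AI system. A real RAA
# turns on the proposal's legal definitions, not boolean features.
@dataclass
class AISystem:
    uses_subliminal_manipulation: bool = False
    enables_social_scoring: bool = False
    is_product_safety_component: bool = False
    used_in_listed_high_risk_area: bool = False   # Annex III-style use cases
    interacts_with_or_impersonates_humans: bool = False  # chatbots, deepfakes

def classify_risk_tier(system: AISystem) -> str:
    """Assign a tier in descending order of restriction:
    prohibited, then high risk, then limited risk, else minimal risk."""
    if system.uses_subliminal_manipulation or system.enables_social_scoring:
        return "prohibited"
    if system.is_product_safety_component or system.used_in_listed_high_risk_area:
        return "high risk"
    if system.interacts_with_or_impersonates_humans:
        return "limited risk"   # transparency obligations only
    return "minimal risk"

print(classify_risk_tier(AISystem(used_in_listed_high_risk_area=True)))  # high risk
```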


Proposed New EU AI Regulations: A Pre-Planning Guide for U.S. In-House Counsel

If the European Commission’s newly proposed harmonized rules on Artificial Intelligence (the “Artificial Intelligence Act”) (published April 21, 2021) are adopted, U.S.-based AI companies operating in European Union (EU) countries (or expecting to do so) may soon be subject to significant new regulatory requirements. The proposed regulations (available PDF here), with few exceptions, would apply to companies or individuals (“providers”) who place on the market or put into service certain high-risk AI systems in the EU, “users” (including companies) of those AI systems who are located in the EU, and providers and users of such AI systems that are located…
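
As a rough sketch of the scope logic described above: the function and parameter names below are invented for illustration, and the proposal's actual scope provisions contain exceptions and nuances omitted here.

```python
# Hypothetical sketch of the applicability test described above.
def regulations_apply(role: str,
                      places_high_risk_system_on_eu_market: bool,
                      located_in_eu: bool) -> bool:
    if role == "provider":
        # Providers are covered when they place covered high-risk systems
        # on the EU market or put them into service there.
        return places_high_risk_system_on_eu_market
    if role == "user":
        # Users (including companies) are covered when located in the EU.
        return located_in_eu
    return False

# A U.S.-based provider selling a high-risk system into the EU:
print(regulations_apply("provider", True, False))  # True
```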


Artificial Intelligence and Trust: Improving Transparency and Explainability Policies to Reverse Data Hyper-Localization Trends

In this peer-reviewed article (Journal of Science and Law; open access), my co-author and I discuss how access to data is an essential part of artificial intelligence (AI) technology development efforts. But government and corporate actors have increasingly imposed localized and hyper-localized restrictions on data due to rising mistrust: the fear and uncertainty about what countries and companies are doing with data, including perceived and real efforts to exploit user data or to create more powerful and possibly dangerous AI systems that could threaten civil rights and national security. If the trend is not reversed, over-restriction could impede AI development to the…


FTC Orders AI Company to Delete its Model Following Consumer Protection Law Violation

The nation’s consumer protection watchdog, the Federal Trade Commission (FTC), took extraordinary law enforcement measures on January 11, 2021, after finding an artificial intelligence company had deceived customers about its data collection and use practices. In a first-of-its-kind settlement involving facial recognition surveillance systems, the FTC ordered Everalbum, Inc., the now-shuttered maker of the “Ever” photo album app and related website, to delete or destroy any machine learning and other models or algorithms developed in whole or in part using biometric information it unlawfully collected from users, along with the biometric data itself. In doing so, the agency…


Artificial Intelligence, GANs, and the Law of Synthetic Data: Lawmakers React to False Media Content

It didn’t take long for someone to turn generative adversarial networks (GANs), a machine learning technique that at first blush seemed benign and of somewhat limited utility at their unveiling, into a tool with the ability to cause real harm. Now, Congress has stepped up and passed legislation to focus the federal government’s attention on the technology. If signed by the president, the legislation will require two federal agencies to study the role GANs play in producing false media content and report their findings back to the respective House and Senate committees, a step seen as a prelude to possible notice-and-comment regulations and…
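
For readers new to the technology, the toy sketch below shows the adversarial training loop that gives GANs their name: a generator learns to produce outputs a discriminator cannot distinguish from real data. It is a minimal one-dimensional illustration (assuming PyTorch is available), not a media-synthesis system.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator maps noise to samples; the discriminator
# scores samples as real (1) or fake (0). Each side trains against
# the other, which is the "adversarial" part of the technique.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 8))
    # Train discriminator: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Train generator: fool the discriminator into labeling fakes as real.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(256, 8)).mean().item())  # should drift toward 3.0
```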


A Look Into the Future of AI Governance

The year 2020 may be remembered for its pandemic and presidential election. But it also marked a turning point in efforts to regulate artificial intelligence (AI) technologies and the systems that embody them. Lawmakers in two states joined Illinois in enacting laws directed at AI-generated biometric data, and federal lawmakers introduced their own measure. The White House in January began exploring frameworks for governing AI. Still, the AI legal landscape remains uncertain, especially for stakeholders who develop and use AI systems and want more predictability so they can properly manage legal liability risks. In this post, a time frame…


Eliminating Structural Bias in Data-Based Technologies: A Path Forward

As national protests over systemic and structural racism in America continue, community organizers, Black scholars, and others fighting injustice and unequal treatment are once again raising awareness of a long-standing problem lurking within artificial intelligence data-based technologies: bias. The campaign to root out bias, or to eliminate biased systems altogether, has been amplified in recent weeks in the wake of reports of a Black Michigan man who was apparently arrested solely on the basis of a misidentification by a facial recognition system used by law enforcement. Criminal charges against the man were dropped only after police discovered their error, but by then the…
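
One way researchers make this kind of bias measurable is to compare a system's error rates across demographic groups. The sketch below does so with entirely fabricated records, comparing false-match rates for two hypothetical groups; the data, groups, and rates are invented for illustration only.

```python
from collections import defaultdict

# Fabricated match decisions: (group, system_said_match, actually_same_person)
records = [
    ("A", True, True), ("A", False, False), ("A", True, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

false_matches = defaultdict(int)
non_matches = defaultdict(int)
for group, predicted, actual in records:
    if not actual:                 # consider true non-match pairs only
        non_matches[group] += 1
        if predicted:              # system wrongly declared a "match"
            false_matches[group] += 1

for group in sorted(non_matches):
    rate = false_matches[group] / non_matches[group]
    print(f"group {group}: false-match rate {rate:.0%}")
# Unequal rates across groups (here ~33% vs ~67%) are one measurable
# form of the bias described above.
```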

EU’s New Plan for Regulating Artificial Intelligence: What U.S. Companies Should Know

On February 19, 2020, the European Commission issued a plan for regulating high-risk artificial intelligence (AI) technologies developed or deployed in the EU. Titled “White Paper on Artificial Intelligence: a European Approach to Excellence and Trust,” the plan was published along with a companion “European Strategy for Data” and follows an earlier “AI Strategy” (2018) and AI-specific ethical guidelines (April 2019). In addition to presenting a framework for regulating “AI applications” in the EU, the Commission’s plan focuses on creating and organizing an ecosystem, encouraging cooperation among member states and institutions, making infrastructure changes, and providing for…


Artificial Intelligence, Risk Disclosures, and a New SEC Materiality Test

When Microsoft shared in 2018 that certain “deficiencies” surrounding its artificial intelligence practices could “subject[] us to competitive harm, legal liability, and brand or reputational harm,” it set the stage for other companies to self-report perceived risks associated with developing, using, and producing AI technologies. Disclosing risks, a Securities and Exchange Commission (SEC) requirement imposed on public companies since 2005, raises a number of important considerations for both public and private companies, including transparency, accuracy, and the degree of speculation that may be acceptable when discussing AI impacts. Although companies in industry segments other than AI face similar concerns, AI…

First They Wanted Data, Now Cyber Thieves Are After Deep Learning Models: Legal Response Options

Imagine you’ve spent months developing and deploying a revenue-generating deep neural network model only to discover that an attacker has stolen the model’s knowledge and will soon offer a service that will steer potential users away from yours. Flashes of late nights and weekends spent collecting and cleaning data cross your mind, accompanied by a sinking feeling when you think about the significant monetary investment made in computational power. The joy you felt finding just the right hyperparameters that made the model unique and, you hoped, lucrative is now in the past. After second-guessing what technical measures could have prevented…
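
The theft scenario described here resembles what researchers call model extraction: the attacker repeatedly queries the deployed model's prediction interface and trains a surrogate on the answers. A minimal sketch, assuming scikit-learn and entirely synthetic stand-ins for the victim model and its data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in "victim": in the scenario above, this is the deployed,
# revenue-generating network exposed through a prediction API.
X_train = rng.normal(size=(1000, 4))
y_train = (X_train[:, 0] + X_train[:, 1] ** 2 > 1).astype(int)
victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                       random_state=0).fit(X_train, y_train)

# The attacker never sees the training data: it queries the interface
# with its own inputs, records the labels, and fits a surrogate model.
X_queries = rng.normal(size=(2000, 4))
stolen_labels = victim.predict(X_queries)
surrogate = DecisionTreeClassifier(random_state=0).fit(X_queries, stolen_labels)

# Agreement on fresh inputs measures how much of the victim's
# "knowledge" leaked through the query interface.
X_test = rng.normal(size=(500, 4))
agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
print(f"surrogate matches victim on {agreement:.0%} of fresh inputs")
```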