News and Analysis of Artificial Intelligence Technology Legal Issues

Are Europe’s Proposed AI Regulations Tough Enough?

The European Commission’s proposed new regulations for artificial intelligence (AI) technologies and systems (link to PDF here; issued April 21, 2021) include enforcement provisions that would empower public authorities to monitor regulated AI entities operating in the European Union (EU) and seek stiff fines from those that do not comply with the rules. The proposed regulations would also grant authorities the power to impose non-monetary penalties, including ordering offending companies to remove their AI systems from the EU market. These are some tough measures, assuming public authorities exercise their discretion in a way that actually incentivizes compliance and positive behavior…


Approaching Applicability of Europe’s Proposed AI Regulations as a Classification Task

The European Commission cast a wide regulatory net over artificial intelligence (AI) technologies and practices last month when it proposed new rules for AI on April 21, 2021 (link to PDF here). For those in the U.S. wondering whether the future rules might apply to them, the answer may be found by performing a regulatory applicability assessment (RAA), which is essentially a feature-driven classification problem, albeit a complex one with significant legal dimensions. That is, for a given AI system or practice, one must determine whether it is a “prohibited,” “high risk,” “limited risk,” or “minimal risk” system/practice based on the definitions of…
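The classification framing above can be sketched in code. The following is an illustrative simplification only: the four risk tiers come from the proposal, but the feature names and decision rules below are hypothetical stand-ins for the Act's actual legal tests, which turn on detailed definitions and annexes.

```python
# Hypothetical sketch: the regulatory applicability assessment (RAA) framed as a
# rule-based classification over an AI system's features. Feature names and
# rules are illustrative simplifications, not the proposal's actual criteria.
from dataclasses import dataclass


@dataclass
class AISystem:
    uses_subliminal_manipulation: bool = False   # manipulative techniques
    government_social_scoring: bool = False      # social scoring by authorities
    safety_component_of_product: bool = False    # e.g., medical devices, vehicles
    used_in_sensitive_area: bool = False         # e.g., employment, credit, policing
    interacts_with_humans: bool = False          # e.g., chatbots, deepfakes


def classify_risk(system: AISystem) -> str:
    """Return a (simplified) risk tier for a given AI system."""
    if system.uses_subliminal_manipulation or system.government_social_scoring:
        return "prohibited"
    if system.safety_component_of_product or system.used_in_sensitive_area:
        return "high risk"
    if system.interacts_with_humans:
        return "limited risk"  # transparency obligations only
    return "minimal risk"


# Example: a hypothetical resume-screening tool used in hiring decisions
print(classify_risk(AISystem(used_in_sensitive_area=True)))  # -> high risk
```

The point of the analogy is that, as with any classifier, the hard work lies in feature extraction: mapping a real-world AI system onto the regulation's defined categories.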


Proposed New EU AI Regulations: A Pre-Planning Guide for U.S. In-House Counsel

If the European Commission’s newly proposed harmonized rules on Artificial Intelligence (the “Artificial Intelligence Act”) (published April 21, 2021) are adopted, U.S.-based AI companies operating in European Union (EU) countries (or expecting to do so) may soon be subject to significant new regulatory requirements. The proposed regulations (PDF available here), with few exceptions, would apply to companies or individuals (“providers”) who place on the market or put into service certain high-risk AI systems in the EU, “users” (including companies) of those AI systems who are located in the EU, and providers and users of such AI systems that are located…


Artificial Intelligence and Trust: Improving Transparency and Explainability Policies to Reverse Data Hyper-Localization Trends

In this peer-reviewed article (Journal of Science and Law; open access), my co-author and I discuss how access to data is an essential part of artificial intelligence (AI) technology development efforts. But government and corporate actors have increasingly imposed localized and hyper-localized restrictions on data due to rising mistrust—the fear and uncertainty about what countries and companies are doing with data, including perceived and real efforts to exploit user data or create more powerful and possibly dangerous AI systems that could threaten civil rights and national security. If the trend is not reversed, over-restriction could impede AI development to the…


Artificial Intelligence, GANs, and the Law of Synthetic Data: Lawmakers React to False Media Content

It didn’t take long for someone to turn generative adversarial networks (GANs)–a machine learning technique that at first blush seemed benign and of somewhat limited utility when unveiled–into a tool capable of causing real harm. Now, Congress has stepped up and passed legislation to focus the federal government’s attention on the technology. If signed by the president, the legislation will require two federal agencies to study the role GANs play in producing false media content and report their findings back to the respective House and Senate committees, which is seen as a prelude to possible notice-and-comment regulations and…


A Look Into the Future of AI Governance

The year 2020 may be remembered for its pandemic and presidential election. But it also marked a turning point in efforts to regulate artificial intelligence (AI) technologies and the systems that embody them. Lawmakers in two states joined Illinois in enacting laws directed at AI-generated biometric data, and federal lawmakers introduced their own measure. The White House in January began exploring frameworks for governing AI. Still, the AI legal landscape remains uncertain, especially for stakeholders who develop and use AI systems and want more predictability so they can properly manage legal liability risks. In this post, a time frame…


Eliminating Structural Bias in Data-Based Technologies: A Path Forward

As national protests over systemic and structural racism in America continue, community organizers, Black scholars, and others fighting injustice and unequal treatment are once again raising awareness of a long-standing problem lurking within artificial intelligence data-based technologies: bias. The campaign to root out bias–or eliminate biased systems altogether–has been amplified in recent weeks in the wake of reports of a Black Michigan man who was apparently arrested solely on the basis of a misidentification by a facial recognition system used by law enforcement. Criminal charges against the man were dropped only after authorities discovered their error, but by then the…


Artificial Intelligence, Risk Disclosures, and a New SEC Materiality Test

When Microsoft shared in 2018 that certain “deficiencies” surrounding its artificial intelligence practices could “subject[] us to competitive harm, legal liability, and brand or reputational harm,” it set the stage for other companies to self-report perceived risks associated with developing, using, and producing AI technologies. Disclosing risks–a Securities and Exchange Commission (SEC) requirement imposed on public companies since 2005–raises a number of important considerations for both public and private companies, including transparency, accuracy, and the degree of speculation that may be acceptable when discussing AI impacts. Although companies in industry segments other than AI face similar concerns, AI…


Recent Court Decisions Boost the Outlook for Artificial Intelligence Patents

Machine learning enthusiasts have long touted the technology’s ability to match–and sometimes exceed–human performance at mental endeavors, such as identifying objects in images, generating a portrait painting, deciding whether to grant a loan application, optimizing a route to a destination, and efficiently responding to website visitor or customer queries. In recent years, such computerized “mental processes” have been denied patent protection, a trend underscored by U.S. federal district court and Federal Circuit patent decisions issued in the wake of the U.S. Supreme Court’s seminal Alice Corp. v. CLS Bank Int’l opinion in 2014, which provided today’s legal framework for determining whether an invention is…

Distributed Artificial Intelligence Systems, Edge Computing, and the Extraterritoriality Doctrine: Testing the Reach of State Privacy Laws

In Patel v. Facebook, a three-judge panel of the U.S. Court of Appeals for the 9th Circuit affirmed a decision by the U.S. District Court for the Northern District of California granting class certification to users of Facebook who alleged that Facebook’s collecting and storing of their face scans using facial recognition technology violated Illinois’s Biometric Information Privacy Act (“BIPA”). In doing so, the panel, based in San Francisco, relied on BIPA’s legislative history to conclude that, “it is reasonable to infer that the [Illinois] General Assembly contemplated BIPA’s application to individuals who are located in Illinois, even if some…