News and Analysis of Artificial Intelligence Technology Legal Issues

Artificial Intelligence, GANs, and the Law of Synthetic Data: Lawmakers React to False Media Content

It didn’t take long for someone to turn generative adversarial networks (GANs)–a machine learning technique that at first blush seemed benign and of somewhat limited utility at its unveiling–into a tool capable of causing real harm.  Now, Congress has stepped up and passed legislation to focus the federal government’s attention on the technology.  If signed by the president, the legislation will require two federal agencies to study the role GANs play in producing false media content and report their findings back to the respective House and Senate committees, which is seen as a prelude to possible notice-and-comment regulations and…


A Look Into the Future of AI Governance

The year 2020 may be remembered for its pandemic and presidential election, but it also marked a turning point in efforts to regulate artificial intelligence (AI) technologies and the systems that embody them. Lawmakers in two more states joined Illinois in enacting laws directed at AI-generated biometric data, and federal lawmakers introduced their own measure.  The White House in January began exploring frameworks for governing AI.  Still, the AI legal landscape remains uncertain, especially for stakeholders who develop and use AI systems and want more predictability so they can properly manage legal liability risks. In this post, a time frame…


Eliminating Structural Bias in Data-Based Technologies: A Path Forward

As national protests over systemic and structural racism in America continue, community organizers, Black scholars, and others fighting injustice and unequal treatment are once again raising awareness of a long-standing problem lurking within artificial intelligence data-based technologies: bias. The campaign to root out bias–or eliminate biased systems altogether–has been amplified in recent weeks in the wake of reports of a Black Michigan man who was apparently arrested solely on the basis of a misidentification by a facial recognition system used by law enforcement.  Criminal charges against the man were dropped only after police discovered their error, but by then the…

EU’s New Plan for Regulating Artificial Intelligence: What US Companies Should Know

On February 19, 2020, the European Commission issued a plan for regulating high-risk artificial intelligence (AI) technologies developed or deployed in the EU. Titled the “White Paper on Artificial Intelligence: a European Approach to Excellence and Trust,” the plan was published alongside a companion “European Strategy for Data” and follows an earlier “AI Strategy” (2018) and AI-specific ethical guidelines (April 2019). In addition to presenting a framework for regulating “AI applications” in the EU, the Commission’s plan focuses on building and organizing an ecosystem, encouraging cooperation among member states and institutions, making infrastructure changes, and providing for…


Artificial Intelligence, Risk Disclosures, and a New SEC Materiality Test

When Microsoft shared in 2018 that certain “deficiencies” surrounding its artificial intelligence practices could “subject[] us to competitive harm, legal liability, and brand or reputational harm,” it set the stage for other companies to self-report perceived risks associated with developing, using, and producing AI technologies. Disclosing risks–a Securities and Exchange Commission (SEC) requirement imposed on public companies since 2005–raises a number of important considerations for both public and private companies, including transparency, accuracy, and the degree of speculation that may be acceptable when discussing AI impacts.  Although companies in industry segments other than AI face similar concerns, AI…