The nation’s consumer protection watchdog, the Federal Trade Commission (FTC), took extraordinary law enforcement measures on January 11, 2021, after finding that an artificial intelligence company had deceived customers about its data collection and use practices. In a first-of-its-kind settlement involving facial recognition surveillance systems, the FTC ordered Everalbum,...
Recent News and Analysis
Artificial Intelligence, GANs, and the Law of Synthetic Data: Lawmakers React to False Media Content
It didn’t take long for someone to turn generative adversarial networks (GANs), a machine learning technique that at first blush seemed benign and of somewhat limited utility at its unveiling, into a tool with the ability to cause real harm. Now, Congress has stepped up and passed legislation to focus the...
A Look Into the Future of AI Governance
The year 2020 may be remembered for its pandemic and presidential election. But it also marked a turning point in efforts to regulate artificial intelligence (AI) technologies and the systems that embody them. Lawmakers in two states joined Illinois in enacting laws directed at AI-generated biometric data, and...
Eliminating Structural Bias in Data-Based Technologies: A Path Forward
As national protests over systemic and structural racism in America continue, community organizers, Black scholars, and others fighting injustice and unequal treatment are once again raising awareness of a long-standing problem lurking within data-based artificial intelligence technologies: bias. The campaign to root out bias, or to eliminate biased systems altogether, has been...
EU’s New Plan for Regulating Artificial Intelligence: What US Companies Should Know
On February 19, 2020, the European Commission issued a plan for regulating high-risk artificial intelligence (AI) technologies developed or deployed in the EU. Titled a “White Paper on Artificial Intelligence: a European Approach to Excellence and Trust,” the plan was published along with a companion “European Strategy...
Artificial Intelligence, Risk Disclosures, and a New SEC Materiality Test
When Microsoft shared in 2018 that certain “deficiencies” surrounding its artificial intelligence practices could “subject us to competitive harm, legal liability, and brand or reputational harm,” it set the stage for other companies to self-report perceived risks associated with developing, using, and producing AI technologies. Disclosing risks, a Securities and...