In Your Face Artificial Intelligence: Regulating the Collection and Use of Face Data (Part I)

Of all the personal information individuals agree to share with online and app services, perhaps none is more personal and intimate than their facial features and moment-by-moment emotional states. And while it may seem that face detection, face recognition, and affect analysis (emotional assessment based on facial features) are technologies only sophisticated, well-intentioned tech companies with armies of data scientists and full-stack engineers are competent to use, the reality is that advances in machine learning, microprocessor technology, and the availability of large datasets containing face data have lowered the barriers to entry for robust face detection, face recognition, and affect analysis to unprecedented levels.

In fact, anyone with a bit of programming knowledge can incorporate open-source algorithms and publicly available image data, train a model, create an app, and start collecting face data from app users. At the most basic entry point, all one really needs is a video camera with built-in face detection algorithms and access to tagged images of a person to start conducting face recognition. And several commercial APIs make it relatively easy to tap into facial coding databases for use in assessing others’ emotional states from face data. If you’re not persuaded by the relative ease with which face data can be captured and used, just drop by any college (or high school) hackathon and see creative face data tech in action.
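To make the low barrier concrete, consider how little code basic face detection requires. The following is a minimal sketch using OpenCV’s bundled Haar cascade detector; the library choice, camera index, and window handling are illustrative assumptions, not a recommendation of any particular tool.

    # Minimal webcam face detection with OpenCV's bundled Haar cascade.
    # Assumes `pip install opencv-python` and a working webcam at index 0.
    import cv2

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    camera = cv2.VideoCapture(0)

    while True:
        ok, frame = camera.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Each detection is an (x, y, width, height) bounding box.
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    camera.release()
    cv2.destroyAllWindows()

Swapping the Haar cascade for a pretrained deep-learning detector, or feeding the cropped faces to an open-source recognition model, takes only a few more lines, which is precisely the point.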

This post considers the uses of face data and briefly summarizes the concerns raised about collecting and using face and emotion data. Part II will explore options for face data governance, including the possibility of new or stronger laws and regulations, as well as policies that a self-regulating industry and individual stakeholders could develop.

The many uses of our faces

Today’s mobile and fixed cameras and AI-based face detection and recognition software enable real-time controlled access to facilities and devices. The same technology allows users to identify fugitive and missing persons in surveillance videos, private citizens interacting with police, and unknown persons of interest in online images.

The technology provides a means for conducting and verifying commercial transactions using face biometric information, tracking people automatically while in public view, and extracting physical traits from images and videos to supplement individual demographic, psychographic, and behavioristic profiles.

Face analysis software and facial coding techniques and models are also making it easier for market researchers, educators, robot developers, and autonomous vehicle safety designers to assess the emotional states of people in human-machine interactions.
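As a rough illustration of how accessible affect analysis has become through commercial APIs, the sketch below queries Amazon Rekognition for per-face emotion estimates. The vendor choice, file name, and configured AWS credentials are assumptions for illustration; other providers expose similar calls.

    # Sketch: querying a commercial API (here, Amazon Rekognition) for
    # emotion estimates from a single photo. Assumes boto3 is installed
    # and AWS credentials are configured.
    import boto3

    client = boto3.client("rekognition")

    with open("face.jpg", "rb") as f:  # hypothetical local image
        image_bytes = f.read()

    response = client.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["ALL"],  # request emotions, age range, etc.
    )

    for face in response["FaceDetails"]:
        # Emotions come back as labels with confidence scores.
        for emotion in face["Emotions"]:
            print(emotion["Type"], round(emotion["Confidence"], 1))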

These and other use cases are possible in part because of advances in camera technology, the proliferation of cameras (think smartphones, CCTVs, traffic cameras, laptop cameras, etc.), and social media platforms, where users create and upload millions of images and videos every day. Increased processing power has also advanced face recognition and affect-based machine learning research and enabled complex models to run faster. As a result, face data is relatively easy to collect, process, and use.

One can easily imagine the many ways face data might be abused, and some abuses have already been reported. Face data and machine learning models have been improperly used to create pornography, for example, and to track individuals in stores and other public locations without notice and without seeking permission. Models based on face data have reportedly been developed for no apparent purpose other than the predictive classification of beauty and sexual orientation.

Face recognition models are also subject to errors. Despite steady improvements, face recognition is not perfect, and misidentification remains a weakness of both face recognition and affect-based models, translating into false positive (and false negative) identifications. Obviously, tragic consequences can occur if police or government agencies make decisions based on a misidentification.
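The mechanics behind misidentification are easy to see in code: most recognition pipelines reduce a match decision to comparing an embedding distance against a threshold, and that threshold trades false positives against false negatives. Here is a minimal sketch using the open-source face_recognition library, with illustrative file names and thresholds.

    # Recognition as thresholded distance comparison. Assumes the
    # `face_recognition` library and two local images, each containing
    # at least one detectable face (hypothetical file names).
    import face_recognition

    known = face_recognition.face_encodings(
        face_recognition.load_image_file("enrolled_person.jpg"))[0]
    probe = face_recognition.face_encodings(
        face_recognition.load_image_file("surveillance_frame.jpg"))[0]

    distance = face_recognition.face_distance([known], probe)[0]

    # A loose threshold (e.g., 0.7) catches more true matches but also
    # mislabels more strangers (false positives); a strict threshold
    # (e.g., 0.5) does the reverse (more false negatives).
    for threshold in (0.5, 0.6, 0.7):
        print(threshold, "match" if distance <= threshold else "no match")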

Face data models have been shown to perform more accurately on persons with lighter skin color. And affect models, which so far have raised fewer concerns than face recognition, due mainly to the technology’s slower rate of adoption, may misinterpret emotions if culture, geography, gender, and other factors are not accounted for in the training data.
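Such disparities surface quickly once evaluation results are disaggregated by subgroup. The sketch below computes per-group accuracy over a handful of entirely hypothetical audit records, purely to illustrate the bookkeeping.

    # Disaggregating model accuracy by subgroup. The `records` below are
    # made-up audit data of the form (subgroup, predicted, actual).
    from collections import defaultdict

    records = [
        ("lighter", "match", "match"),
        ("lighter", "no match", "no match"),
        ("darker", "match", "no match"),   # a false positive
        ("darker", "no match", "no match"),
    ]

    totals, correct = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        correct[group] += (predicted == actual)

    for group in totals:
        print(group, f"accuracy={correct[group] / totals[group]:.0%}")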

Of course, instances of reported abuse, bias, and data breaches overshadow the many unreported positive uses and machine learning applications of face data. But as is often the case, problems tend to catch the eyes of policymakers, regulators, and legislators, and overreaction to hyped problems can produce a patchwork of regulations and standards that goes beyond addressing the underlying concerns and causes unintended effects, such as stifling innovation and reducing competitiveness.

Moreover, reactionary regulation doesn’t play well with fast-moving disruptive tech such as face recognition and affective computing, where the law always seems to be in catch-up mode. Compounding the governance problem is the fact that regulators and legislators cannot read crystal balls; future uses of face data technologies may be hard to imagine today.

Even so, what matters to many is what governments and companies are doing with still images and videos, and specifically how face data extracted from media are being used, sometimes without consent. These concerns raise questions of transparency, privacy law, terms of service and privacy policy agreements, data ownership, ethics, and data breaches, among others. They also implicate the question of whether and when federal and state governments should tighten existing regulations and impose new ones where gaps exist in face data governance.

With recent data breaches making headlines, and with policymakers and stakeholders gathering in 2018 to examine AI’s impacts, there is no better time to revisit the need for stronger laws and to develop new technical and ethical standards and guidelines for face data. The next post will explore these issues.
