Congress, States Introduce New Laws for Facial Recognition, Face Data – Part 2

In Part I, new proposed federal and state laws governing the collection, storage, and use of face (biometric) data in connection with facial recognition technology were described.  If enacted, those new laws would join Illinois’ Biometric Information Privacy Act (BIPA), the California Consumer Privacy Act (CCPA), and Texas’ “biometric identifier” regulations in the governance of face-related data.  It is reasonable for businesses to assume that other state laws and regulations will follow, bringing a shifting legal landscape with attendant uncertainty and legal risk.  A thoughtful, proactive approach to managing the risks associated with facial recognition technology will distinguish businesses that avoid adverse economic and reputational impacts from those that face lawsuits and unwanted media attention.

Businesses with a proactive approach to risk management will of course already be aware of the proposed new laws described in Part I.  S. 847 (the federal Commercial Facial Recognition Privacy Act of 2019) and H1654-2 (Washington State’s house bill) suggest what’s to come, but biometric privacy laws like those in California, Texas, and Illinois have been around for a while.  Companies that do business in Illinois, for instance, already know that BIPA regulates the collection of biometric data, including face scans, and has generated substantial litigation because of its private right of action.  Maintaining awareness of the status of existing and proposed laws will be important for businesses that collect, store, and use face data.

At the same time, however, federal governance of AI technologies under the Trump Administration is expected to favor a policy- and standards-based approach over more onerous command-and-control regulatory agency rulemaking (which the Trump Administration often refers to as “barriers”).  The takeaway for businesses is that the rulemaking provisions of S. 847 may look quite different if the legislation makes it out of committee and is reconciled with other federal bills, adding to the uncertainty.

But even in the absence of regulations (or at least regulations with teeth) and the threat of private lawsuits (neither S. 847 nor H1654-2 provides a private right of action), managing risk may require at least minimal self-regulation by businesses that use facial recognition technology, handle face data directly or indirectly, or otherwise rely on the output of a facial recognition system.  Those that don’t, or that follow controversial practices that monetize face data at the expense of trust, such as obscuring how they use consumer data, are more likely to see a backlash.

In most cases, companies handling data already have privacy policies and terms of service (TOS) or end-user license agreements that address user data and privacy issues.  Those documents should be reviewed regularly and updated to address face data and facial recognition technology concerns.  Moreover, “camera in use” notices are not difficult to implement for entities that deploy cameras for security, surveillance, or other reasons.  Avoiding legalese and vague or uncertain terms in those documents and notices can help reduce risk.  H1654-2 provides that a meaningful privacy notice should include: (a) the categories of personal data collected by the controller; (b) the purposes for which the categories of personal data are used and disclosed to third parties, if any; (c) the rights that consumers may exercise, if any; (d) the categories of personal data that the controller shares with third parties, if any; and (e) the categories of third parties, if any, with whom the controller shares personal data.  In the case of camera notices, prominently displaying the notice is standard, but companies should also be mindful of the differences between S. 847 and H1654-2 concerning notice and implied consent: the former may require notice and separate consent, while the latter may provide that notice alone equates to implied consent under certain circumstances.
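To make those notice requirements concrete, the sketch below (in Python, purely for illustration) models H1654-2’s five notice categories as a structured record that a compliance team could use to audit a draft notice for completeness.  The class and field names are hypothetical assumptions, not drawn from the bill.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch: a structured record mirroring H1654-2's five
# notice categories, so a draft notice can be audited for completeness.
@dataclass
class PrivacyNotice:
    data_categories: List[str] = field(default_factory=list)         # (a)
    purposes: List[str] = field(default_factory=list)                # (b)
    consumer_rights: List[str] = field(default_factory=list)         # (c)
    shared_data_categories: List[str] = field(default_factory=list)  # (d)
    third_party_categories: List[str] = field(default_factory=list)  # (e)

    def missing_sections(self) -> List[str]:
        """Return the names of any empty sections flagged for review."""
        return [name for name, value in vars(self).items() if not value]

notice = PrivacyNotice(
    data_categories=["face geometry scans"],
    purposes=["building access control"],
)
print(notice.missing_sections())  # flags (c), (d), and (e) for follow-up
```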

Appropriate risk management also means that business entities supplying face data sets to others for machine learning development, training, and testing purposes understand the source of their data and the data’s potential inherent biases.  Those businesses will be able to articulate the same to users of the data (who may insist on certain assurances about the data’s quality and utility if it is not provided on an as-is basis).  Ignoring potential inherent biases in data sets is inconsistent with a proactive and comprehensive risk management strategy.
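For data suppliers, one low-cost starting point is profiling a data set’s own metadata for representation imbalance.  The sketch below assumes a CSV manifest with a demographic_group column; the file name, column name, and 5% threshold are all illustrative assumptions, not requirements drawn from any of the bills discussed here.

```python
import csv
from collections import Counter

# Hypothetical sketch: profile a face data set's manifest to surface
# representation imbalances before the data changes hands.  Assumes a
# CSV manifest with a "demographic_group" column; the file and column
# names are illustrative only.
def group_distribution(manifest_path: str) -> dict:
    with open(manifest_path, newline="") as f:
        counts = Counter(row["demographic_group"] for row in csv.DictReader(f))
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

shares = group_distribution("face_dataset_manifest.csv")
for group, share in sorted(shares.items(), key=lambda kv: kv[1]):
    flag = "  <- possibly underrepresented" if share < 0.05 else ""
    print(f"{group}: {share:.1%}{flag}")
```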

Both S. 847 and H1654-2 refer to a human-in-the-loop review process in certain circumstances, such as where a final decision based on the output of a facial recognition technology may result in reasonably foreseeable and material physical or financial harm to an end user, or where it could be unexpected or highly offensive to a reasonable person.  Although “reasonably foreseeable,” “harm,” and “unexpected or highly offensive” are undefined, a thoughtful approach to managing risk and mitigating damages might consider ways to implement human review, mindful of the federal and state consumer protection, privacy, and civil rights laws that could be implicated absent a human reviewer.
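What a human-in-the-loop gate might look like in practice is sketched below, again as an illustration only: decisions flagged as potentially harmful or offensive, or made with low confidence, are routed to a person rather than applied automatically.  The trigger fields and confidence threshold are assumptions; as noted, the bills leave the key terms undefined.

```python
from dataclasses import dataclass

# Hypothetical sketch of a human-in-the-loop gate.  The risk fields and
# threshold are assumptions, not definitions from S. 847 or H1654-2.
@dataclass
class MatchDecision:
    subject_id: str
    confidence: float
    physical_or_financial_impact: bool   # e.g., denial of entry or credit
    potentially_offensive_context: bool  # e.g., unexpected surveillance use

def route(decision: MatchDecision, auto_threshold: float = 0.99) -> str:
    if decision.physical_or_financial_impact or decision.potentially_offensive_context:
        return "human_review"   # statute-like trigger: possible material harm
    if decision.confidence < auto_threshold:
        return "human_review"   # low confidence: do not act automatically
    return "automated"

print(route(MatchDecision("visitor-42", 0.97, False, False)))  # human_review
```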

The White House’s AI technology use policy and S. 847 refer to the National Institute of Standards and Technology (NIST), which could play a large role in AI technology governance.  Learning about NIST’s current standards-setting approach and its AI model evaluation process could help companies seeking to do business with the federal government.  Of course, independent third parties could also evaluate a business’s AI models for bias, problematic data sets, and model leakage, and identify potential problems that might lead to litigation.  While not every situation may require such extra scrutiny, the ability to recognize and avoid risks might justify the added expense.
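As one example of the kind of evaluation an independent third party might perform, the sketch below compares false-match rates across demographic groups, loosely in the spirit of NIST’s demographic-differential testing of face recognition algorithms.  The trial records are fabricated for illustration.

```python
from collections import defaultdict

# Hypothetical sketch of one bias check an independent evaluator might run:
# false-match rates per demographic group.  Each trial is a
# (group, predicted_match, actual_match) triple; the records are made up.
trials = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_a", False, False),
]

false_matches = defaultdict(int)
impostor_trials = defaultdict(int)
for group, predicted, actual in trials:
    if not actual:                       # impostor trial: no true match exists
        impostor_trials[group] += 1
        if predicted:                    # system wrongly declared a match
            false_matches[group] += 1

for group in sorted(impostor_trials):
    fmr = false_matches[group] / impostor_trials[group]
    print(f"{group}: false-match rate {fmr:.0%}")
# A large gap between groups would warrant deeper review before deployment.
```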

As noted above, neither S. 847 nor SB 5376 includes a private right of action like BIPA’s, but the new laws could allow state attorneys general to bring civil actions against violators.  Businesses should consider the possibility of such legal actions, as well as the other potential risks arising from the use of facial recognition technology and face data collection, when assessing the risk factors that must be discussed in certain SEC filings.

Above are just a few of the factors and approaches that businesses could consider as part of a risk management strategy for the use of facial recognition technology in the face of a changing legal landscape.