
Approaching Applicability of Europe’s Proposed AI Regulations as a Classification Task

The European Commission cast a wide regulatory net over artificial intelligence (AI) technologies and practices last month when it proposed new rules for AI on April 21, 2021 (link to PDF here). For those in the U.S. wondering whether the future rules might apply to them, the answer may be found by conducting a regulatory applicability assessment (RAA), which is essentially a feature-driven classification problem, albeit a complex one with legal dimensions. That is, for a given AI system or practice, one must determine whether it is a “prohibited,” “high risk,” “limited risk,” or “minimal risk” system/practice based on the definitions of those categories given in the Regulations and on the EC’s website. (Also, the proposed Regulations would apparently not apply to private, non-professional uses of AI technologies, so the classification scheme could also include “non-regulated,” for lack of a better label.) Here is how the proposed Regulations and the EC currently define AI systems and practices by class or category:

    • Prohibited: AI systems and practices classified as prohibited are those that present an unacceptable risk of infringing the fundamental rights of others. Title II of the proposed Regulations describes the prohibited AI systems and practices, in some cases using functional and results-oriented language (some of which is ambiguous and could lead to uncertainty about a company’s status under the Regulations). Examples of prohibited AI systems and practices include certain remote biometric surveillance applications used by law enforcement (with the exception of certain strictly necessary uses).
    • High Risk: AI systems and practices classified as high risk are identified in Title III of the proposed Regulations. Generally, they include those that create a high risk of adverse impact on the health and safety or fundamental rights of natural persons, taking into account a system’s functions and the specific purposes and modalities for which the system is to be used. Specifically identified high risk systems are listed in Annex III (which may be updated as new high risk systems are identified by regulatory authorities), and include systems and practices used in the following areas:
      • critical infrastructure;
      • educational or vocational training;
      • product safety;
      • employment, workers management and access to self-employment;
      • essential private and public services;
      • certain law enforcement practices;
      • migration, asylum and border control management; and
      • administration of justice and democratic processes.
    • Limited Risk: According to the EC’s website, AI systems classified as limited risk include those with a clear risk of manipulation, such as chatbots. In such cases, providers may be subject to specific transparency obligations under the rules, such as making users aware that they are interacting with a machine.
    • Minimal Risk: The EC predicts that most AI systems and practices will be classified as having a minimal risk of adverse impacts on citizens’ rights or safety. Minimal risk AI systems include AI-enabled video games and spam filters.

The goal of conducting the RAA is to determine which of the above classifications one’s AI systems and practices most likely fall into. Obviously, the person overseeing the assessment should fully document the data and information used in reaching the decision, especially when the assessment relies on subjective assumptions or rule interpretations. Moreover, an initial classification is not the end of the RAA process. Making changes to an AI system or practices (e.g., a system’s network architecture or its intended purpose) after the effective date of the Regulations may result in the updated system or practices being reclassified. For example, a limited risk AI system could be modified such that, in its new configuration, it should fairly be reclassified as high risk. Likewise, a technological change made to a high risk system could place it in a lower risk classification.
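For readers who think in code, the framing of the RAA as a feature-driven classification task can be sketched schematically. The short Python sketch below is purely illustrative: the feature names, decision rules, and category ordering are hypothetical simplifications assumed for this example, not an encoding of the proposed Regulations, and the notes list merely stands in for the documentation step described above. An actual assessment requires legal analysis of the rule text.

```python
# Minimal, illustrative sketch of a rule-based "regulatory applicability
# assessment" (RAA) framed as a classification task. All feature names and
# decision rules are hypothetical simplifications for illustration only.

from dataclasses import dataclass, field


@dataclass
class AISystemProfile:
    """Hypothetical features describing an AI system or practice."""
    professional_use: bool = True        # private, non-professional use may fall outside the rules
    prohibited_practice: bool = False    # e.g., a practice described in Title II
    annex_iii_area: bool = False         # e.g., a use case listed in Annex III
    interacts_with_humans: bool = False  # e.g., a chatbot
    notes: list = field(default_factory=list)


def classify(profile: AISystemProfile) -> str:
    """Assign a risk category, recording the reasoning for documentation."""
    if not profile.professional_use:
        profile.notes.append("Private, non-professional use assumed out of scope.")
        return "non-regulated"
    if profile.prohibited_practice:
        profile.notes.append("Matches a prohibited practice (Title II).")
        return "prohibited"
    if profile.annex_iii_area:
        profile.notes.append("Falls within a high-risk area (Annex III).")
        return "high risk"
    if profile.interacts_with_humans:
        profile.notes.append("Transparency obligations may apply.")
        return "limited risk"
    profile.notes.append("No higher-risk features identified.")
    return "minimal risk"


if __name__ == "__main__":
    chatbot = AISystemProfile(interacts_with_humans=True)
    print(classify(chatbot), chatbot.notes)  # -> limited risk

    # Reclassification after a change: the same system repurposed for a
    # hypothetical Annex III area would land in a higher-risk category.
    chatbot.annex_iii_area = True
    chatbot.notes.clear()
    print(classify(chatbot), chatbot.notes)  # -> high risk
```

The second call in the example illustrates the reclassification point made above: changing a single feature of the system (here, repurposing it for a hypothetical Annex III use) changes the category that the same rules assign to it.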

Why perform a risk-based classification assessment now, given that the proposed Regulations are potentially many months from being implemented? For the simple reason that learning about potential future regulatory obligations may help direct near-term planning efforts and resource spending. For example, knowing in advance whether an AI system might be subject to transparency requirements in the future could allow for better planning. Conducting an RAA now can also provide insight into other areas of one’s business. For example, a company may want to assess the adequacy of its insurance coverage or the sufficiency of existing financial risk factor assessments (which some public U.S. companies are required to produce). Others may wish to assess the capabilities of in-house resources to address potentially burdensome regulatory requirements. Still others may want to know whether making changes to an AI system and practices now could avoid application of the Regulations altogether.