This article discusses the risk classifications of AI systems under the European Union Artificial Intelligence Act, their implications, and how the Act seeks to balance innovation with accountability.
Background information
The European Union has been at the forefront of developing frameworks to regulate emerging technologies, one of which is artificial intelligence. This effort culminated in the adoption of the first comprehensive regulatory framework in Europe – the Artificial Intelligence Act, which entered into force on 1 August 2024. Proposed by the Commission in April 2021 and agreed by the European Parliament and the Council in December 2023, the regulation aims to ensure that AI systems deployed within the Union are safe, lawful, and respectful of fundamental rights.
Central to this legislation is a risk-based classification approach whereby the regulatory requirements for different types of AI systems are established based on their perceived potential to cause harm.
The risk classification system
The AI Act identifies four risk categories for AI systems: unacceptable risk, high risk, limited risk and minimal risk. Each category covers specific types of AI systems, the potential dangers associated with their use, and the corresponding regulatory obligations.
- Unacceptable risk
AI systems presenting a major threat to fundamental rights, safety, and democratic values are strictly prohibited under the AI Act. These include:
- Social scoring systems that evaluate individuals based on their behaviour or characteristics to limit access to services or opportunities.
- Manipulative AI systems that exploit vulnerabilities of users, such as AI targeting children or vulnerable individuals with manipulative content.
- Real-time biometric identification systems for surveillance in public spaces, except in narrowly defined and limited cases, such as counter-terrorism.
- High risk
High-risk AI systems constitute the most highly regulated category. These systems are not banned but require a strict level of supervision due to their potential for significant harm. They are typically used in critical sectors such as:
- Critical infrastructure: AI systems managing power grids or transportation networks where failures can lead to serious harm.
- Employment and education: Systems used in hiring, evaluating, or assigning grades capable of impacting the livelihoods of individuals.
- Law enforcement and justice systems: AI tools used for predictive policing or for evidentiary assessments in judicial contexts.
- Healthcare: AI used for diagnostics, treatment planning, or assistance in surgery.
- Migration, asylum and border control: AI systems used for the automated processing of visa applications.
Developers of high-risk systems must comply with rigorous requirements, including carrying out comprehensive risk assessments before placing the product on the market, guaranteeing transparency of the system for users, and putting in place mechanisms for human oversight and redress.
- Limited risk
The limited risk category encompasses AI systems such as chatbots, virtual assistants, and AI applications for marketing or customer segmentation. These systems do not require extensive oversight; however, developers must ensure that users are aware that they are interacting with an AI system.
- Minimal or no risk
AI systems classified as minimal risk include those used for entertainment, personal productivity or other everyday purposes, such as AI-enabled video games, photo-editing software, or spam filters. These systems pose negligible risk of harm to users and are thus exempt from specific regulatory requirements under the AI Act. However, they must still comply with general EU laws and ethical guidelines on responsible use.
Challenges and criticisms
While the risk classification system is a pragmatic approach, it is not without challenges:
- Ambiguity in classification: For some AI systems, determining whether they fall under the high-risk or the limited-risk category can be complex, especially for newer applications that blur the line between categories.
- Burden on small firms: The compliance requirements for high-risk AI systems may weigh disproportionately on smaller players compared with tech giants, potentially stifling competition.
- Global harmonisation: Aligning EU regulations with those of other jurisdictions, such as the United States or China, is a pressing issue, as AI systems operate across borders.
Conclusion
The AI Act is considered a significant step towards responsible governance of artificial intelligence. By categorising AI systems based on risk, the framework ensures that the level of oversight is proportional to the potential for harm. This nuanced approach protects fundamental rights while fostering innovation within safe and ethical boundaries. Continued dialogue among policymakers, industry leaders, and civil society will be important as the AI landscape evolves, so that these rules can be refined and effectively implemented.
Gohar Simonyan
M2 Cyberjustice – Promotion 2024/2025
Sources:
https://eur-lex.europa.eu/legal-content/EN-FR/TXT/?from=EN&uri=CELEX%3A32024R1689
https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
https://kpmg.com/ch/en/insights/artificial-intelligence/eu-ai-act.html
https://www.iese.edu/insight/articles/artificial-intelligence-europe-innovation-regulation/