An Analysis of the EU AI Act
by
Aaryan Sanadhya, Intern, Seth Associates
Keywords: artificial intelligence, cyberlaw, cybersecurity
The EU’s AI Act is the first legislation of its kind, with the explicit objective of regulating AI systems and safeguarding human rights, the rule of law, and democracy, while simultaneously encouraging innovation and investment in the emerging AI industry.
Classification of AI Systems and Corresponding Obligations
The European Artificial Intelligence Act introduces a broad risk-based regulatory approach for AI systems, categorising them into four levels of risk: unacceptable, high, limited, and minimal. These categories are based mainly on the systems’ technological features and broad areas of application.
Firstly, “unacceptable risk” AI systems are those considered to pose the highest degree of threat to people, and they will be prohibited outright.[1] This category includes AI systems which:
- purposefully employ manipulative techniques with the object or effect of distorting the behaviour of persons or specific vulnerable groups by impairing their ability to make an informed decision;
- evaluate and subsequently categorise persons based on their social behaviour, which may lead to discrimination in social contexts unrelated to those in which the data was originally generated, or to unfavourable treatment resulting from disproportionate or erroneous inferences from that data;
- assess or predict the likelihood of a natural person committing a criminal offence based solely on profiling or on personality traits and characteristics;
- create or expand facial recognition databases through indiscriminate scraping of images from the web or CCTV footage;
- infer the emotions of natural persons in workplaces and educational institutions;
- categorise individuals based on their biometric data to predict or infer, inter alia, their race, sexual orientation, or religion; and
- perform real-time biometric identification in publicly accessible spaces, subject to certain exceptions.
Secondly, “high risk” AI systems are permitted, but only under the most stringent regulatory measures.[2] This category essentially encompasses safety components of regulated products and standalone AI systems within defined domains. These systems have the potential to cause significant harm if they malfunction or are misused, posing risks to health, safety, fundamental rights, or the environment. An AI system may qualify for this category in three ways:
- when the AI system is itself a certain type of regulated product;
- when the AI system is a safety component of a certain type of product; or
- when the AI system meets the description of any of the systems listed in Annex III to the EU AI Act.
All AI systems falling under this category will be assessed for safety before being placed on the market and throughout their lifecycle.[3] Members of the general public will also have the right to file complaints against such AI systems. The Act will come into force in August 2024; however, the majority of its provisions will apply from August 2026.
The third level of risk is “limited risk”, which covers AI systems that carry a risk of manipulation or deception. AI systems in this category bear transparency obligations: humans must be informed that they are interacting with an AI (unless this is obvious), and any deepfakes must be labelled as such. Chatbots, for example, are classified as limited risk. This is particularly relevant for generative AI systems and their content, which for the lay person has lately become synonymous with AI.
Lastly, AI systems associated with the lowest level of risk are categorised as “minimal risk”. This level includes all other AI systems that do not fall under the above-mentioned categories, such as spam filters. Minimal-risk AI systems face no restrictions or mandatory obligations, though providers are encouraged to follow general principles such as human oversight, non-discrimination, and fairness.
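For readers approaching the Act from an engineering perspective, the four-tier scheme can be summarised as a simple ordered taxonomy. The following Python sketch is purely illustrative: the RiskTier enum and the example mappings are hypothetical simplifications drawn from the examples above, not a legal classification test.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act, highest to lowest."""
    UNACCEPTABLE = "prohibited outright (Article 5)"
    HIGH = "permitted, subject to strict obligations (Article 6)"
    LIMITED = "permitted, subject to transparency duties"
    MINIMAL = "permitted, no mandatory obligations"

# Hypothetical mapping of example systems to tiers, mirroring the examples
# in the text above; real classification turns on detailed legal criteria.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening recruitment tool (an Annex III domain)": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```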
Aspects that the Act Fails to Address or Answer Satisfactorily
- The EU AI Act fails to ensure that EU-based providers whose products have an impact outside the EU are subject to the same restrictions and compliance standards, and it does not restrict EU-based providers from exporting “unacceptable risk” AI systems outside its territory.
- It lacks a mandatory baseline AI safety standard and leaves “minimal risk” AI systems entirely unregulated.
- The Act fails to fully appreciate the risks surrounding open-source models: releasing powerful AI models as open source raises public-safety concerns, since unregulated access can lead to abuse, including malware creation and terrorism.
- While the Act categorises generative AI outputs such as deepfakes as “limited risk”, it paradoxically classifies any system that detects such deepfakes as “high risk”.
- Retrospective facial recognition has not been prohibited and poses an alarming risk.
Obligations of High-Risk AI Providers
Under the EU AI Act, developers and providers of high-risk AI systems bear important responsibilities to ensure their technologies are safe, reliable, and ethical.[4] They must carefully assess risks and carry out conformity checks to meet strict legal standards. This includes documenting[5] and clearly explaining what their AI systems can do, their limitations, and any potential risks to users and other affected parties.
Developers must also establish mechanisms for human oversight of high-risk AI operations[6], ensuring that humans can step in when needed. After deployment, continuous monitoring[7] is crucial to remain compliant with the regulations and to quickly fix any issues that arise. Incident-reporting systems are required so that problems are addressed promptly and operations remain transparent.
Keeping detailed records of audits and working closely with regulators demonstrates a commitment to accountability and openness about how high-risk AI technologies are used. These steps aim to build trust in how AI is developed and used responsibly across the EU, ensuring it benefits society while protecting fundamental rights and values.
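To make these obligations concrete, the sketch below models them as a simple compliance checklist keyed to the articles cited in the footnotes. The Obligation class and the checklist entries are illustrative assumptions only; actual conformity assessment under the Act is considerably more involved.

```python
from dataclasses import dataclass

@dataclass
class Obligation:
    article: str          # article of the EU AI Act, per the footnotes
    duty: str             # the provider duty discussed in this section
    satisfied: bool = False

# Illustrative checklist tracking the duties discussed above; it is not
# an exhaustive statement of what the Act requires.
HIGH_RISK_CHECKLIST = [
    Obligation("Art. 9", "maintain a risk management system across the lifecycle"),
    Obligation("Art. 14", "provide for effective human oversight"),
    Obligation("Art. 16", "meet the general obligations of providers"),
    Obligation("Art. 18", "keep technical documentation and records"),
]

outstanding = [o for o in HIGH_RISK_CHECKLIST if not o.satisfied]
print(f"{len(outstanding)} duties outstanding:")
for o in outstanding:
    print(f"  {o.article}: {o.duty}")
```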
Responsibilities of Generative AI Developers
Generative AI providers have an important responsibility to ensure that their technology is used ethically and safely.[8] They must adhere to a strict regulatory framework that prioritises the regulation of AI development and deployment. Ethical considerations are paramount: providers must proactively address potential biases, protect user privacy, and be transparent about the capabilities and risks associated with their AI systems. Such transparency helps users and stakeholders interact responsibly with AI-generated products, and it must be supported by robust data-protection policies throughout the AI lifecycle.
Continuous monitoring and accountability mechanisms are essential for providers to promptly address any issues that arise and to maintain trust. Communication with stakeholders, including law enforcement and industry counterparts, helps maintain compliance and establish ethical standards. Routine impact assessments gauge the broader social implications of generative AI and guide providers in reducing risk and maximising positive outcomes. If these responsibilities are managed properly, generative AI can be integrated responsibly and beneficially, promoting innovation while protecting social values and human rights.
Bibliography:
- Bertin Martens, “The European Union AI Act: premature or precocious regulation?”, Bruegel, 7 March 2024, <https://www.bruegel.org/sites/default/files/2024-03/the-european-union-ai-act%3A-premature-or-precocious-regulation%3F–9793_0.pdf>
- Lewis Silkin, “EU AI Act: 101 – An In-depth Analysis of Europe’s AI Regulatory Framework”, 28 March 2024, <https://www.lewissilkin.com/en/insights/ed-eu-ai-act101-an-in-depth-analysis-of-europes-ai-regulatory-framework>
- Article 19, “EU: AI Act fails to set gold standard for human rights”, 4 April 2024, <https://www.article19.org/resources/eu-ai-act-fails-to-set-gold-standard-for-human-rights/>
- Philipp Hacker, “What’s Missing from the EU AI Act”, Verfassungsblog, 13 December 2023, <https://verfassungsblog.de/whats-missing-from-the-eu-ai-act/>
- Federica Paolucci, “Shortcomings of the AI Act”, Verfassungsblog, 14 March 2024, <https://verfassungsblog.de/shortcomings-of-the-ai-act/>
- Louis Holbrook, “The EU Artificial Intelligence Act and its Human Rights Limitations”, Oxford Human Rights Hub, 11 April 2023, <https://ohrh.law.ox.ac.uk/the-eu-artificial-intelligence-act-and-its-human-rights-limitations/>
- Pinsent Masons, “A guide to high-risk AI systems under the EU AI Act”, 13 February 2024, <https://www.pinsentmasons.com/out-law/guides/guide-to-high-risk-ai-systems-under-the-eu-ai-act>
[1] Article 5 of the EU AI Act.
[2] Article 6 of the EU AI Act.
[3] Article 8 of the EU AI Act.
[4] Article 16 of the EU AI Act.
[5] Article 18 of the EU AI Act.
[6] Article 14 of the EU AI Act.
[7] Article 9 of the EU AI Act.
[8] Article 53 of the EU AI Act.