An EU Project in light of the EU AI Act
The SERMAS Toolkit: a compliant High-risk AI System

June 10, 2024

On the 21st of May 2024, the Council of the European Union formally adopted the EU Artificial Intelligence (AI) Act. But what does it cover, and how does it impact solutions such as the Socially-acceptable Extended Reality Models And Systems (SERMAS) project, which deal with biometric identification and emotion recognition?


According to the official website of the EU Artificial Intelligence Act, the purpose of the regulation is to promote the adoption of human-centric AI solutions that people can trust, ensuring that such products are developed with people's health, safety, and fundamental rights in mind. As such, the EU AI Act outlines, among other things, a set of rules, prohibitions, and requirements for AI systems and their operators.


When analysing something like the result of the SERMAS project, the SERMAS Toolkit, from the EU AI Act’s point of view, we not only need to verify whether the designed solution complies with the regulations laid down, but also need to assess whether the project's outcomes, such as exploitable results (e.g., a software product or solutions deployed “in the wild”), will comply with them.


Parts of the Toolkit are being developed in research institutions, such as SUPSI, TUDa, or KCL, which means it falls under the exclusion criterion for scientific research and development according to Article 2(6). In addition, since the AI systems and models are not yet placed on the market or put into service, the AI Act's regulations are not yet applicable to the Toolkit according to Article 2(8) either. As a general rule of thumb, if a solution is either being developed solely as a scientific research activity or is not yet placed on the market, it does not need to go through the administrative process and risk assessment outlined by the Act.


That being said, if a solution involving AI is planned to be put on the market or provided as a service to those who wish to deploy it, even if it is open source, it is better to design its components with the regulations in mind and to prepare the documentation required for the legal release of the software. This article outlines some of these aspects using the SERMAS Toolkit as an example. Still, it is important to emphasize that every AI system needs to be evaluated individually (and, where applicable, registered) against the regulations of the Act.


First, we should look at the terminology relevant to the SERMAS Toolkit. These terms are all outlined in Article 3 of the regulation. To begin with, the SERMAS Toolkit is considered an “AI system” since it is a “machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for ... implicit objectives, infers, from the input it receives” (Article 3(1)). Part of the SERMAS Toolkit is a virtual agent whose facial expressions and behaviour are adjusted based on how it perceives users, and which is capable of user identification. However, the system is not fully autonomous, as it also exhibits predefined knowledge and behaviour that is not governed by AI components. Therefore, it is considered an AI system operating with varying levels of autonomy. Speaking of user identification, the Toolkit also deals with:


  • “biometric data” - “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images” (Article 3(34));
  • “biometric identification” - “the automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database” (Article 3(35));
  • and has a component that is an “emotion recognition system” - “an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data” (Article 3(39)).


While Article 6 outlines that systems involving biometric identification and emotion recognition count as High-risk AI systems, the SERMAS Toolkit is only considered one due to its emotion recognition, since Annex III states that “biometric identification systems ... shall not include AI systems intended to be used for biometric verification the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be” (Annex III(1a)) - which is the Toolkit's only use case of biometric identification. However, Article 6(2) and Annex III also specify that an AI system is considered high-risk if it is “intended to be used for emotion recognition” (Annex III(1c)) to any extent. While an AI system performing emotion recognition could, according to the EU AI Act, be prohibited, the SERMAS Toolkit, like any other system, can still be deployed if a review body finds that it does not negatively affect its users (e.g., by harming or discriminating against them). Highlighting the factors relevant to the SERMAS Toolkit, each AI system is evaluated based on the “intended purpose of the AI system”, the extent of its use, the amount of data it processes, and the extent to which it acts autonomously (Article 7(2a-d)). Moreover, the review body evaluates the potential for harm or for people suffering an adverse impact from using the system, and the possibility of a person overriding a “decision or recommendation that may lead to potential harm” (Article 7(2g)). Finally, “the magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or society at large, including possible improvements in product safety” (Article 7(2j)) is also taken into account.


But how does the SERMAS Toolkit compare to the above criteria? To begin with, it processes only the minimum amount of data that is absolutely necessary, and it is undergoing multiple reviews during its conception to ensure the secure management of any identifiable personal data. Moreover, it uses user identification only to enhance a building’s security in an access-control scenario. And finally, it uses emotion recognition solely to provide and enhance the user experience by adapting the behaviour of a virtual agent. This tailored experience does not change the output of the agent, only its body language and facial expressions, which are limited to neutral or positive nonverbal features. As such, the agent does not and cannot discriminate against people.
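
To make this adaptation concrete, here is a minimal sketch of how a detected emotion might drive only the agent's nonverbal presentation while being clamped to neutral-or-positive expressions. The type and function names (DetectedEmotion, AgentExpression, selectExpression) are illustrative assumptions for this article, not the actual SERMAS Toolkit API.

```typescript
// Hypothetical sketch: an emotion-recognition result drives only the agent's
// nonverbal layer, constrained to neutral-or-positive expressions.
type DetectedEmotion = "happy" | "neutral" | "sad" | "angry" | "surprised";
type AgentExpression = "neutral" | "friendly" | "encouraging";

// The agent's verbal output is produced elsewhere and is never touched here;
// only body language and facial expression are adapted.
function selectExpression(emotion: DetectedEmotion): AgentExpression {
  switch (emotion) {
    case "happy":
    case "surprised":
      return "friendly";
    case "sad":
    case "angry":
      // Negative affect is met with a supportive, still-positive expression,
      // never with a negative or dismissive one.
      return "encouraging";
    default:
      return "neutral";
  }
}

// Example: the answer itself is identical for every user; only the nonverbal layer changes.
console.log(selectExpression("sad")); // "encouraging"
```

The point of such a constraint is that the spoken or textual answer is generated independently, so the recognised emotion can never change what the agent says, only how it looks while saying it.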


So, while the Toolkit is not a prohibited solution after all, it is still considered a High-risk system, which means that upon deployment, or when provided to deployers, a number of regulations outlined by the Act need to be complied with. For instance, a risk-management system needs to be implemented and periodically revised, with a focus on the system's biometric identification and emotion recognition sub-systems (Article 9(1-2)). Since these sub-systems are model-based classification or identification methods, the regulations outlined in Article 10(1-6) for the training, validation, and testing of the underlying models and datasets should be followed (e.g., datasets used for training the models should be ethically collected, with as few errors as possible, and without bias). Moreover, thorough technical documentation is expected to be published and kept up to date according to Article 11(1), and the system should come with appropriate logging of interactions (Article 12(3)) and human-oversight tools (Article 14(1-2)), especially during identification processes. Lastly, when the Toolkit is put on the market, which is part of the exploitable results of the project, it needs to undergo a conformity assessment and is required to be registered in an EU database together with its accompanying documentation (Articles 43 and 49).
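
As a rough illustration of the logging requirement, the sketch below shows what a minimal interaction log entry could look like, with identification and emotion-recognition events recorded alongside whether a human operator could intervene. The interface, field names, and the persistLog function are assumptions made for this example, not the Toolkit's actual logging schema.

```typescript
// Hypothetical sketch of an interaction log entry supporting Article 12-style
// record keeping, with identification events flagged for later review.
interface InteractionLogEntry {
  timestamp: string;               // ISO 8601 time of the event
  sessionId: string;               // anonymous session identifier, not the user's identity
  eventType: "dialogue" | "biometric_identification" | "emotion_recognition";
  modelVersion: string;            // version of the model that produced the result
  outcome: string;                 // e.g. "access_granted", "expression=friendly"
  humanOverrideAvailable: boolean; // whether an operator could intervene (Article 14)
}

// Append-only persistence keeps records available for audits and post-market monitoring.
async function persistLog(entry: InteractionLogEntry): Promise<void> {
  // In a real deployment this would write to tamper-evident, access-controlled storage.
  console.log(JSON.stringify(entry));
}

// Example: logging a biometric verification used for building access control.
void persistLog({
  timestamp: new Date().toISOString(),
  sessionId: "session-1234",
  eventType: "biometric_identification",
  modelVersion: "face-verify-0.3.1",
  outcome: "access_granted",
  humanOverrideAvailable: true,
});
```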


In conclusion, the EU AI Act means that AI systems dealing with personal data, especially when it comes to emotion recognition or identification that affects how an autonomous system operates, need to comply with the laid-out regulations. Solutions put into service or on the market may be classified as High-risk and would be prohibited by default. However, a thorough assessment procedure, together with compliance with additional security requirements, logging, and documentation, may mean that an AI system can be cleared for deployment after all. As for systems built for non-professional or scientific research purposes, such as the still-in-development SERMAS Toolkit, they are encouraged to voluntarily comply with the requirements outlined by the Act (Recital 109), and in the long run they will benefit from being developed with the requirements in mind.


The EU AI Act can be explored in detail here, and to get a quick assessment of whether, and to what extent, an AI system that will be put on the market or into service is affected by it, the same website also provides this compliance checker tool.

Written for the SERMAS project
- see here