AI Act: A Step Towards the First Rules on Artificial Intelligence


Stefano Bargagni

An update on the Emotion AI debate in 2023: regulatory proposals and their impact on MorphCast technology.

Press Release

Summary of the press release in question

Press release on the Draft Compromise Amendments of 16 May 2023 concerning the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (AI Act) and amending certain Union legislative acts.

The amendments include bans on invasive and discriminatory uses of AI systems, such as real-time biometric recognition systems in publicly accessible spaces, biometric categorization systems using sensitive characteristics, and emotion recognition systems in the context of law enforcement, border management, workplaces, and educational institutions.

Furthermore, parliamentarians have extended the classification of high-risk areas to include harm to people’s health, safety, fundamental rights, or the environment. Providers of foundation models – a new and rapidly evolving field in AI – would have the obligation to ensure robust protection of fundamental rights, health and safety, the environment, democracy, and the rule of law.

To promote innovation in AI, parliamentarians have added exemptions to these rules for research activities and AI components provided with open-source licenses. The new regulation promotes regulatory sandboxes, namely controlled environments, established by public authorities to test AI before its deployment.

Quotes on AI Act

Brando Benifei (Socialists & Democrats, Italy), co-rapporteur, stated after the vote: “We are on the verge of enacting landmark legislation that must stand the test of time. It is crucial to build citizens’ trust in the development of AI, define the European method for addressing the extraordinary changes already underway, and drive the global political debate on AI. We are convinced that our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe.”

Co-rapporteur Dragos Tudorache (Renew, Romania) said: “Given the importance of the transformation that AI will have on our societies and economies, the AI Act is arguably the most important piece of legislation of this term. It is the first law of its kind globally, meaning the EU can lead the way to make AI human-centered, trustworthy, and secure. We have worked to support AI innovation in Europe and give startups, SMEs, and industry room to grow and innovate, while protecting fundamental rights, strengthening democratic oversight, and ensuring a mature system of AI governance and enforcement.”

Evidence on AI Act

I meticulously examined the 144-page document approved by the European Parliament cited in the press release, aiming to identify possible implications and any appropriate measures to be taken now. However, I believe it is still too early to formulate definitive conclusions, as these proposals, subject to possible changes over time, have not yet been formalized into law.

Here are the parts of the document that are interesting for us:

Page 140 (34) Definition

“Emotion recognition system”: an AI system for the purpose of identifying or inferring emotions, thoughts, states of mind, or intentions of individuals or groups on the basis of their biometric data or biometric-based data;

Page 112 (33)

The article emphasizes that because biometric data is a special category of sensitive personal data, many critical use cases of biometrics-based systems should be classified as high-risk. AI systems intended for the biometric identification of natural persons, and those intended to make inferences about the personal characteristics of people based on biometric data, including emotion recognition systems, should therefore be classified as high risk.

Impact for MorphCast

MorphCast, as a provider of AI-based emotion recognition technologies, could be classified as high risk according to this regulation. This means that MorphCast might be subject to more stringent regulations, compliance assessments, and potential limitations in the use of its products. However, if MorphCast manages to demonstrate compliance and safety, the high-risk classification could also provide the opportunity to gain greater trust and acceptance in the market, distinguishing itself from other providers who fail to meet such standards.

Page 128 Article 5

Article 5 lists a series of practices related to artificial intelligence that will be prohibited. These include the use of AI systems that employ subliminal or manipulative techniques, that exploit the vulnerabilities of a specific individual or group, or that use biometric categorization systems based on sensitive or protected attributes. The article also prohibits the use of AI systems for social scoring or risk assessment of individuals or groups, and the use of AI systems to infer a person’s emotions in specific contexts such as law enforcement, border management, the workplace, and educational institutions.

Impact for MorphCast

If MorphCast’s systems are used to infer a person’s emotions in contexts such as law enforcement, border management, the workplace, and educational institutions, these uses would be prohibited under Article 5. If these provisions are included in the final law, the company should therefore be careful about how and where its products are used, to ensure they do not violate the new rules. For example, it could adopt a code of ethics and monitor that its customers comply with it.

Page 9 (64)

The article discusses the importance of developing adequate capacity for third-party compliance assessment of high-risk artificial intelligence systems, given their complexity and associated risks. However, considering the current experience of pre-market professional certifiers in the field of product safety and the different nature of the risks involved, it is deemed appropriate to limit, at least in an initial phase of application of this Regulation, the scope of third-party compliance assessment for high-risk AI systems that are not linked to products.

Therefore, the compliance assessment of such systems should be carried out, as a general rule, by the provider under its own responsibility. The only exception is for AI systems intended to be used for remote biometric identification of individuals, or AI systems intended to make inferences about the personal characteristics of natural persons based on biometric data or based on biometrics, including emotion recognition systems. For these systems, the involvement of a notified body in the compliance assessment should be foreseen, as long as they are not prohibited.

Impact for MorphCast

In the context of this article, MorphCast, as a provider of AI-based emotion recognition technologies, could be impacted in several ways.

On the one hand, the requirement for a compliance assessment by a notified body could entail additional costs and time, potentially representing a disadvantage if these resources are significant.

On the other hand, if MorphCast successfully demonstrates compliance with regulatory requirements, it could indeed benefit from this. Passing a third-party compliance assessment could serve as a “seal of approval” that increases customer and market trust in MorphCast’s product. This could in turn facilitate access to new markets or sectors and could distinguish MorphCast from competitors who are unable to demonstrate the same compliance.

In any case, how the regulation will impact MorphCast will depend on multiple factors, including the specific details of the regulation, the exact nature of MorphCast’s technologies, and how the company chooses to respond to these challenges and opportunities.

Page 127 (26c)

The article expresses serious concerns about the scientific basis of AI systems aimed at detecting emotions from physical or physiological characteristics such as facial expressions, movements, pulse rate, or voice. Emotions, the expression of emotions, and their perception vary greatly between cultures and situations, and even within a single individual. The main deficiencies of such technologies are limited reliability, lack of specificity, and limited generalizability. These reliability issues entail greater risks of abuse, especially when the system is used in real-life situations related to law enforcement, border management, the workplace, and educational institutions. Therefore, the marketing, commissioning, or use of AI systems intended to detect individuals’ emotional state in these contexts should be prohibited.

Impact for MorphCast

If the proposal becomes law as formulated, it could have a significant impact on MorphCast’s operations. The company might have to face limitations in the use of its technologies in certain contexts, such as law enforcement, border management, the workplace, and educational institutions. This could require MorphCast to review its product and market strategies to adapt to the new rules and limitations.

AI Act: what to do

Given the nature of the proposed regulation, we should continue the strategies put in place since January 2023 and consider the following actions:

Continuous monitoring of the legislative landscape: Keep an eye on legislative and regulatory developments to ensure that MorphCast remains in compliance with any new regulations or legislation. This can also help the company anticipate and prepare for any future changes. Pay particular attention to meeting the requirements that will be introduced for conformity assessment by notified bodies.

Adaptation of markets and technology: If the regulation restricts the use of some MorphCast products in specific contexts such as law enforcement, border management, workplaces, and educational institutions, the company should turn to other markets and applications for its technologies, for example entertainment, test groups in protected environments, advertising, and healthcare, and aim to develop technologies for use outside the high-risk contexts. Within the restricted contexts, the company should adapt its technology by eliminating emotion tracking and replacing it with tracking of physical presence and degree of attention to content (in education, for example), and should exclude law enforcement and border-protection personnel from possible use of its technologies.

Communication and transparency: It is important to clearly communicate the benefits and reliability of MorphCast’s emotion recognition solutions to existing and potential customers, as well as to regulators. Transparency about the reliability and validity of MorphCast technologies can help build trust and address any concerns.

Review of current technologies: Check if the current AI solutions for emotion recognition meet the reliability, specificity, and generalizability criteria indicated in the regulation. If there are areas where current technologies are not up to par, it will be necessary to invest in research and development to address these shortcomings.

Conclusions: AI Act and MorphCast

Despite the challenges posed by the new regulation, there are many opportunities for MorphCast in the field of AI-based emotion recognition, provided the company remains attentive to transparency, sustainability, the defense of human rights, ethics, and responsibility. The company has been investing in this direction for months, and a summary of the progress made, with hyperlinks, can be found on the Mission & Company Social Responsibility page.

I would like to remind you that MorphCast holds two specific patents related to interactive videos that respond in real time to the user’s emotions. This vision of emotional interactivity has been our goal from the beginning, which is why we filed the patent applications in 2014, which were later granted, and developed the Emotion AI Interactive Video Platform.

This platform, except in cases of extremely manipulative and distorted use, does not have characteristics that would bring it within the scope of the regulation in question. Indeed, I want to stress that MorphCast’s Emotion AI Interactive Video Platform is not just a sophisticated emotion recognition technology. Based on cutting-edge technologies derived from the video game industry, this platform is the result of years of research and significant investment by our company, and today it holds great potential for future development. More than a simple Emotion AI tool, the platform is an innovative tool for creating interactive media that reacts to the user’s emotions. This feature makes it particularly suitable for applications in the advertising and entertainment sectors, areas which, by their nature, are excluded from the scope of the regulation in question. Therefore, while recognizing the importance of regulating emotion recognition, the value and potential of our technology go well beyond this aspect.


About the Author

Stefano Bargagni

A serial Internet entrepreneur with a background in computer science (hardware and software), Stefano built an e-commerce platform and founded the online retailer CHL Spa in early 1993, one year before Amazon. He is the Founder and CEO of MorphCast.