Guidelines and Policies for the Responsible Use of Emotion AI

Last modified February 11, 2023

Responsibility as a value creation tool

Corporate social responsibility can help businesses improve their reputation, build positive relationships with communities, manage risk, and create greater long-term stability. More generally, accountability helps build trusting relationships with stakeholders and creates a more sustainable and positive business environment.

It is important to consider the ethical and social implications of using Emotion AI

At MorphCast, we promote the ethical use of Emotion AI, which includes respecting individuals’ autonomy and dignity. Emotion AI should not be used to manipulate or deceive individuals, or to cause harm.

We have appropriate governance and oversight in place to ensure that our Emotion AI service is used responsibly. This includes establishing guidelines and policies for its use, as well as mechanisms to monitor and enforce compliance with these guidelines, including by our customers.

This is the purpose of the guidelines listed below. Some of them are completely under our control: we are directly involved in them and make every effort to follow and update them constantly. We list them in this guide for completeness of information. However, most of the guidelines are under the control of our clients. On this last point, we are constantly committed to spreading a culture of conscious use of Emotion AI that respects human rights and all privacy laws, as well as common sense and civic engagement.

AI Ethics in Action

At MorphCast, we are very sensitive to ethics and responsibility. We continuously strive to explain this, and we have ongoing actions and processes in place to ensure that our clients and their users use our AI system as safely as possible.

Please refer to this documentation to learn more about: Operation, Training Data, and Capabilities; Risks & Limitations; Measures to Ensure Security and Reliability; and the Damage or Discrimination Complaint System.

Generally speaking, Emotion AI can be used to enhance the interaction between a system and a user by allowing the system to respond to the emotions of the user. Here are some more detailed ways in which Emotion AI could be used to advantage by industries and individuals:

  1. In the entertainment industry, to create more immersive and interactive experiences for users. For example, Emotion AI can be used to make personalized recommendations for entertainment content based on the emotions of the user. More innovatively, it can also be used to create virtual reality or augmented reality experiences that respond to the emotions of the user.
  2. In the advertising industry, to create more personalized and targeted advertisements based on the emotions of the audience, avoiding the need to track online behavior and deposit cookies in the browser that bombard users with often outdated messages generated by programmatic re-marketing. Another example: Emotion AI can be used to analyze the emotional response of viewers to different advertisements and identify which are most effective at eliciting certain emotions. This information can then be used to tailor future advertisements to better appeal to the emotions of the target audience.
  3. Improving customer service. Emotion AI can help customer service agents better understand the emotions of their customers. This could allow them to provide more personalized and effective support. For example, an Emotion AI system could be used in a customer service chatbot to identify when a customer is feeling frustrated or upset, and provide appropriate responses to help resolve the issue. Emotion AI can also be used in virtual assistants or personal assistants to provide personalized recommendations or responses based on the emotions of the user.
  4. Enhancing virtual assistants. Emotion AI can make virtual assistants more human-like, by allowing them to recognize and respond to the emotional state of the user.
  5. Improving mental health: Emotion AI can help identify individuals who may be at risk of developing mental health issues. It can also provide them with appropriate support and resources.
  6. Enhancing education: Emotion AI can help teachers better understand the emotional state of their students, which may allow them to tailor their teaching approaches to better meet students’ needs. It can also measure a student’s degree of attention to a given piece of content, and even detect when a student is perplexed or shows the characteristic illumination of someone who has just understood a concept. Based on this emotional data, a course can become truly interactive and personalized, like a well-trained human tutor.
  7. Training for positive expressions in job interviews and in every interaction with others. A positive facial attitude facilitates the interlocutor’s empathy. Emotion AI can also help novice actors understand which expression best conveys a character’s various moods.
  8. Improving decision-making. Emotion AI can help individuals and organizations make more informed decisions by taking into account the emotional state of those involved.
  9. In videoconferencing tools to enhance the communication and interaction between participants. For example, Emotion AI can be used to analyze the facial expressions and body language of participants to identify their emotional states, and provide real-time feedback or prompts to facilitate better communication. Emotion AI can also be used to create personalized recommendations or responses based on the emotions of the participants.

MorphCast Responsible Use of Emotion AI GUIDELINES

Obtain explicit and informed consent from individuals before collecting or processing their emotional data. 

Obtaining explicit and informed consent from individuals before collecting or processing their emotional data is an important part of responsible use of Emotion AI. This means that you need to clearly communicate the purpose of the Emotion AI, how the data will be used, and the potential risks and benefits to the individual. It is also important to give the individual the opportunity to opt-in or opt-out of participating in the collection or processing of their emotional data. 

Obtaining explicit and informed consent is important because it ensures that individuals are aware of how their emotional data is being used and that they have the ability to make an informed decision about whether or not to participate. It also helps to protect the privacy of individuals and ensures that their emotional data is only used for the purpose for which it was collected.

In order to obtain explicit and informed consent, it is important to provide clear and concise information to individuals about the Emotion AI and its purpose, as well as any potential risks or benefits. It is also important to give individuals the opportunity to ask questions and get more information before making a decision.
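As an illustration of how such an opt-in gate might work in practice, the sketch below blocks any emotion analysis until consent is explicitly granted, and stops it when consent is withdrawn. All class and function names are hypothetical, not part of any real SDK:

```javascript
// Hypothetical consent gate: emotion analysis may only start after an
// explicit, informed, revocable opt-in. Names are illustrative only.
class ConsentManager {
  constructor(purpose) {
    this.purpose = purpose; // why emotional data is collected
    this.record = null;     // no consent until the user opts in
  }
  // Called only after the purpose, risks, and benefits have been shown.
  optIn() {
    this.record = { purpose: this.purpose, grantedAt: new Date().toISOString() };
  }
  // The user can withdraw at any time; processing must stop immediately.
  optOut() {
    this.record = null;
  }
  hasConsent() {
    return this.record !== null;
  }
}

function startEmotionAnalysis(consent) {
  if (!consent.hasConsent()) {
    throw new Error("Explicit consent required before processing emotional data");
  }
  return `analysis started for: ${consent.record.purpose}`;
}

const consent = new ConsentManager("personalized content recommendations");
// Calling startEmotionAnalysis(consent) here would throw: no opt-in yet.
consent.optIn();
console.log(startEmotionAnalysis(consent)); // runs only after opt-in
consent.optOut();                           // withdrawal revokes consent
```

The key design point is that consent is a precondition checked at the point of processing, not a one-time flag set at signup, so withdrawal takes effect immediately.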

Ensure that Emotion AI systems are designed and used in a way that respects the privacy and dignity of individuals.

It is important to design and use Emotion AI systems in a way that respects the privacy and dignity of individuals. Here are some recommendations for doing so:

  1. Obtain explicit consent from individuals before collecting or using their personal data for Emotion AI systems. This includes clearly explaining the purpose of the data collection and how the data will be used.
  2. Protect the privacy of individuals by securely storing and handling personal data in accordance with relevant laws and regulations.
  3. Avoid using Emotion AI systems to make decisions that could have significant impacts on individuals, such as hiring or promotion decisions, without providing a transparent and fair process for individuals to challenge or appeal these decisions.
  4. Ensure that Emotion AI systems are designed and used in a way that does not discriminate against or unfairly disadvantage any particular group of individuals.
  5. Consider the potential risks and unintended consequences of using Emotion AI systems, and implement appropriate safeguards to mitigate these risks.
  6. Regularly review and update the design and use of Emotion AI systems to ensure that they continue to respect the privacy and dignity of individuals.

Be transparent about the capabilities and limitations of Emotion AI systems, and communicate these clearly to users.

It is important to be transparent about the capabilities and limitations of Emotion AI systems and to communicate these clearly to users. This can help users understand the potential benefits and limitations of using these systems and make informed decisions about whether and how to use them.
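One lightweight way to make such disclosures concrete is a machine-readable summary shown to users before they opt in. Every field name and value below is illustrative, not part of any real product:

```javascript
// Hypothetical disclosure record for an Emotion AI feature.
const disclosure = {
  purpose: "Adapt media playback to the viewer's engagement level",
  dataSources: ["in-browser facial expression analysis; no images leave the device"],
  knownLimitations: [
    "Accuracy degrades in low light or with partially occluded faces",
    "Expressions are proxies for emotion, not ground truth",
  ],
  risks: ["Misclassification may lead to irrelevant recommendations"],
  lastReviewed: "2023-02-11",
};

// Render a plain-language summary that can be shown to users before opt-in.
function summarize(d) {
  return [
    `Purpose: ${d.purpose}`,
    `Data: ${d.dataSources.join("; ")}`,
    `Known limitations: ${d.knownLimitations.join("; ")}`,
    `Risks: ${d.risks.join("; ")}`,
    `Last reviewed: ${d.lastReviewed}`,
  ].join("\n");
}

console.log(summarize(disclosure));
```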

Here are some recommendations for being transparent about the capabilities and limitations of Emotion AI systems:

  1. Clearly explain the purpose and intended use of the Emotion AI system to users.
  2. Provide information about the data sources and methods used to train and evaluate the Emotion AI system, including any assumptions or biases that may be present in the data or algorithms.
  3. Disclose any known limitations or weaknesses of the Emotion AI system, such as its accuracy or ability to handle certain types of inputs or contexts.
  4. Communicate the potential risks and benefits of using the Emotion AI system to users, and provide guidance on how to use the system safely and effectively.
  5. Regularly review and update the information provided about the capabilities and limitations of the Emotion AI system to ensure that it is accurate and up-to-date.

Consider the potential impacts of Emotion AI on vulnerable or marginalized groups, and take steps to mitigate any negative consequences.

It is important to consider the potential impacts of Emotion AI on vulnerable or marginalized groups, and to take steps to mitigate any negative consequences. Emotion AI systems have the potential to perpetuate or amplify existing biases and inequalities, and it is important to ensure that these systems are designed and used in a way that is fair and inclusive.

Here are some recommendations for mitigating the potential negative impacts of Emotion AI on vulnerable or marginalized groups:

  1. Conduct an equity impact assessment to identify any potential negative impacts on vulnerable or marginalized groups, and take steps to mitigate these impacts.
  2. Ensure that the data used to train and evaluate Emotion AI systems is diverse and representative of the population that the system will be used with.
  3. Regularly review and update the design and use of Emotion AI systems to ensure that they are fair and inclusive.
  4. Provide clear guidelines and training to users of Emotion AI systems on how to use the systems in a way that is fair and respectful of all individuals.
  5. Consider implementing mechanisms for individuals to challenge or appeal decisions made by Emotion AI systems that may have negative impacts on them.

Regularly assess and review the performance and outcomes of Emotion AI systems to ensure they are operating as intended and not causing harm.

It is important to regularly assess and review the performance and outcomes of Emotion AI systems to ensure they are operating as intended and not causing harm. This can help identify any potential problems or issues with the system and allow for corrective action to be taken.
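As a minimal sketch of such monitoring (the metric, baseline, and tolerance are all illustrative assumptions), a periodic job could compare recent accuracy against a baseline and flag drift for human review:

```javascript
// Compare a rolling accuracy metric against a baseline and flag drift.
// The 0.05 tolerance is an arbitrary illustrative threshold.
function checkDrift(baseline, recentScores, tolerance = 0.05) {
  const mean = recentScores.reduce((a, b) => a + b, 0) / recentScores.length;
  return {
    recentMean: mean,
    drifted: Math.abs(mean - baseline) > tolerance, // true → trigger a human review
  };
}

// Example: accuracy measured on a labelled evaluation set, week by week.
const report = checkDrift(0.82, [0.81, 0.78, 0.72, 0.70]);
console.log(report.drifted ? "Flag for review" : "Within tolerance"); // → "Flag for review"
```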

Here are some recommendations for conducting regular assessments and reviews of Emotion AI systems:

  1. Set clear performance metrics for the Emotion AI system and track these metrics over time to ensure that the system is meeting its intended goals.
  2. Monitor the outcomes of the Emotion AI system to identify any unintended consequences or negative impacts on individuals or groups.
  3. Conduct regular audits or assessments of the Emotion AI system to evaluate its performance and identify any issues or concerns.
  4. Consider implementing mechanisms for users of the Emotion AI system to provide feedback or report any problems or concerns.
  5. Review and update the design and use of the Emotion AI system as needed to address any identified issues or concerns.

Develop and implement robust safeguards to prevent bias and discrimination in the development and deployment of Emotion AI systems.

It is important to develop and implement robust safeguards to prevent bias and discrimination in the development and deployment of Emotion AI systems. Bias and discrimination can occur at various stages in the development and deployment of these systems, from the data used to train and evaluate the systems to the ways in which the systems are used in practice.
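One simple safeguard of this kind is a fairness check that compares positive-outcome rates across demographic groups. The function below computes a demographic-parity-style ratio; the group names and the 0.8 threshold (inspired by the common four-fifths rule of thumb) are assumptions for illustration:

```javascript
// Ratio of the lowest to the highest positive-outcome rate across groups.
// 1.0 means perfectly equal rates; lower values suggest disparity.
function parityRatio(outcomesByGroup) {
  // outcomesByGroup: { groupName: { positive, total }, ... }
  const rates = Object.values(outcomesByGroup).map((g) => g.positive / g.total);
  return Math.min(...rates) / Math.max(...rates);
}

const outcomes = {
  groupA: { positive: 45, total: 100 },
  groupB: { positive: 30, total: 100 },
};
const ratio = parityRatio(outcomes); // 0.30 / 0.45 ≈ 0.67
console.log(ratio >= 0.8 ? "Within threshold" : "Possible disparity: investigate");
```

A check like this is only a coarse screen; a result below the threshold is a prompt for human investigation, not proof of discrimination.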

Here are some recommendations for preventing bias and discrimination in Emotion AI systems:

  1. Use diverse and representative data to train and evaluate Emotion AI systems to ensure that they are not biased against certain groups of individuals.
  2. Regularly review and assess the data used to train and evaluate Emotion AI systems to identify and mitigate any biases or discriminatory patterns.
  3. Use fairness metrics and tools to evaluate the performance of Emotion AI systems and identify any potential biases or discriminatory outcomes.
  4. Provide training and guidance to users of Emotion AI systems on how to use the systems in a fair and unbiased manner.
  5. Consider implementing mechanisms for individuals to challenge or appeal decisions made by Emotion AI systems that may be biased or discriminatory.
  6. Regularly review and update the design and use of Emotion AI systems to ensure that they are fair and unbiased.

Ensure that individuals have access to appropriate redress mechanisms if they feel that their emotional data has been misused or their privacy has been violated.

It is important to ensure that individuals have access to appropriate redress mechanisms if they feel that their emotional data has been misused or their privacy has been violated. This can help individuals feel that their rights and interests are being respected and protected, and can also help to build trust in Emotion AI systems.

Here are some recommendations for providing appropriate redress mechanisms for individuals:

  1. Clearly explain to individuals how their emotional data will be collected, used, and protected, and provide information about their rights and options for challenging or appealing decisions made using their data.
  2. Implement mechanisms for individuals to report any concerns or complaints about the misuse or abuse of their emotional data.
  3. Establish procedures for investigating and responding to reports of emotional data misuse or privacy violations, and provide appropriate remedies for individuals who have been affected.
  4. Consider implementing independent oversight or review mechanisms to ensure that emotional data is being used in a responsible and ethical manner.
  5. Regularly review and update the redress mechanisms in place to ensure that they are effective and responsive to the needs of individuals.

Provide training and resources to help users understand and effectively use Emotion AI systems.

It is important to provide training and resources to help users understand and effectively use Emotion AI systems. This can help ensure that users are aware of the capabilities and limitations of these systems, and can use them in a way that is safe and effective.

Here are some recommendations for providing training and resources to help users understand and effectively use Emotion AI systems:

  1. Clearly explain the purpose and intended use of the Emotion AI system to users, and provide information about its capabilities and limitations.
  2. Provide training and guidance to users on how to use the Emotion AI system safely and effectively, including any best practices or recommended procedures.
  3. Make available documentation and other resources that users can refer to when using the Emotion AI system, such as user manuals or FAQs.
  4. Consider offering ongoing support or assistance to users of the Emotion AI system, such as through a help desk or online support forum.
  5. Regularly review and update the training and resources provided to users to ensure that they are accurate and up-to-date.

Work with industry, academia, and other stakeholders to advance the ethical and responsible use of Emotion AI.

It is important to work with industry, academia, and other stakeholders to advance the ethical and responsible use of Emotion AI. This can help ensure that Emotion AI systems are developed and used in a way that is fair, transparent, and respectful of the privacy and dignity of individuals.

Here are some recommendations for working with industry, academia, and other stakeholders to advance the ethical and responsible use of Emotion AI:

  1. Engage with industry and academia to share knowledge and expertise on the ethical and responsible use of Emotion AI.
  2. Participate in industry associations, professional societies, and other organizations that focus on the ethical and responsible use of Emotion AI.
  3. Collaborate with other stakeholders, such as civil society organizations and government agencies, to identify and address potential ethical and social issues related to Emotion AI.
  4. Work with industry, academia, and other stakeholders to develop and promote best practices, guidelines, and standards for the ethical and responsible use of Emotion AI.
  5. Regularly review and update the approaches and strategies for working with industry, academia, and other stakeholders to advance the ethical and responsible use of Emotion AI.

These guidelines are just the beginning of a work that we will continue to evolve, with a view to using Emotion AI consciously and for the benefit of people, while preventing, limiting, and controlling the risks of inappropriate, abusive, or even criminal use.

We are a member of the European AI Alliance, and in drafting these guidelines we took into consideration the European Union’s Ethics Guidelines for Trustworthy AI and the proposal for a regulation establishing harmonized rules on artificial intelligence, available at this link: Proposal for a Regulation laying down harmonised rules on artificial intelligence.

MorphCast and ReD OPEN, a spin-off of the University of Milano-Bicocca, join forces for sustainable, compliant, and socially responsible AI innovations.