Abstract
Studies on emotion perception often require stimuli that convey different emotions. These stimuli can serve as a tool to understand how agents react to different circumstances. Although stimuli are commonly used to alter an agent's emotions, it remains unclear how to measure the agent's resulting emotional state.
This paper suggests a new method for measuring the emotional state among interacting agents in a given environment. We model an adaptive emotional framework that takes into account agent emotions, interactions, and the learning process. To solve the problem, we employ a non-cooperative game-theoretic approach to represent the interaction between agents and a Reinforcement Learning (RL) process to introduce stimuli into the environment. We restrict our problem to a class of finite and homogeneous Markov games. The emotional problem is ergodic: each emotion is represented by a state of a Markov chain that is reached with some probability, and each emotional strategy of the Markov model is represented as a probability distribution. To measure the emotional state among agents, we employ the Kullback-Leibler distance between the resulting emotional strategies of the interacting agents. Because this measure is distribution-wise asymmetric, the feelings of one player toward another are relative (the two directions may differ). We propose an algorithm for the RL process and a two-step approach for solving the game. We present an application example, the selection of a candidate for a specific position using assessment centers, to show the effectiveness of the proposed method by a) measuring the emotional distance among the interacting agents and b) measuring the "emotional closeness degree" of the interacting agents to an ideal candidate agent.
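A minimal sketch of the measure, using illustrative notation rather than the paper's own symbols: if $p$ and $q$ denote the emotional strategies of two interacting agents, viewed as probability distributions over emotional states, the standard Kullback-Leibler divergence is
\[
D_{\mathrm{KL}}(p \,\|\, q) \;=\; \sum_{i} p(i)\,\log \frac{p(i)}{q(i)},
\]
and in general $D_{\mathrm{KL}}(p \,\|\, q) \neq D_{\mathrm{KL}}(q \,\|\, p)$, which is why the emotional distance from one agent to another need not equal the distance in the opposite direction.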