Researchers are one step closer to achieving ‘emotionally intelligent’ AI

Researchers at the Japan Advanced Institute of Science and Technology have integrated biological signals with machine learning methods to enable “emotionally intelligent” AI. According to the researchers, emotional intelligence could lead to more natural human-computer interactions.

The new study has been published in the journal IEEE Transactions on Affective Computing.

Achieving emotional intelligence

Speech and language recognition technologies like Alexa and Siri are constantly evolving, and the addition of emotional intelligence could take them to the next level. This would mean that, in addition to understanding language, these systems could recognize the user’s emotional state and generate more empathetic responses.

“Multimodal sentiment analysis” refers to a group of methods that are the gold standard for AI dialogue systems with sentiment detection. These methods can automatically analyze a person’s psychological state from their speech, facial expressions, tone of voice, and posture. They are fundamental to creating human-centric AI systems and could lead to the development of emotionally intelligent AI with “beyond-human capabilities”. Such abilities would help the AI understand the user’s sentiment before formulating an appropriate response.
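As a rough illustration of how estimates from multiple modalities might be combined, here is a minimal late-fusion sketch in Python. The modality names, scores, and weights are hypothetical placeholders for illustration only; they are not values or methods from the study.

```python
# Minimal late-fusion sketch (hypothetical values, not from the study):
# each modality produces a sentiment score in [0, 1], and a weighted
# average fuses them into a single estimate.

def fuse_modalities(scores, weights):
    """Weighted late fusion of per-modality sentiment scores."""
    total = sum(w for m, w in weights.items() if m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

# Hypothetical per-modality outputs for one dialogue exchange.
scores = {"speech": 0.62, "facial": 0.55, "voice_tone": 0.70, "posture": 0.48}
weights = {"speech": 0.4, "facial": 0.2, "voice_tone": 0.2, "posture": 0.2}

print(round(fuse_modalities(scores, weights), 3))  # → 0.594
```

In practice, each per-modality score would come from its own trained model, and the fusion weights would themselves be learned rather than hand-set as above.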

Analyzing unobservable signals

Current estimation methods focus primarily on observable information, leaving out the information contained in unobservable signals, such as physiological signals. These signals carry valuable data that could improve sentiment estimation.

In the study, physiological signals were added to multimodal sentiment analysis for the first time. The team of researchers who undertook this study included Associate Professor Shogo Okada of the Japan Advanced Institute of Science and Technology (JAIST) and Professor Kazunori Komatani of Osaka University’s Institute of Scientific and Industrial Research.

“Humans are very good at hiding their feelings,” says Dr. Okada. “A user’s internal emotional state is not always accurately reflected by the content of the dialogue. But since it is difficult for a person to consciously control biological signals, such as heart rate, it can be helpful to use them to estimate their emotional state. This could create an AI with sentiment-estimation abilities beyond those of humans.”

The team’s study involved analyzing 2,468 exchanges with a dialogue AI obtained from 26 participants. With this data, the team was able to estimate the level of pleasure felt by the user during the conversation.

Each user was then asked to rate how enjoyable or boring the conversation was. The team used the multimodal dialogue dataset “Hazumi1911”, which combines speech recognition, voice-tone sensing, posture detection, and facial-expression detection with skin potential, a form of physiological response detection.

“Comparing all separate information sources, biosignal information was found to be more effective than voice and facial expression,” Dr. Okada continued. “When we combined the linguistic information with the biosignal information to estimate the self-rated internal state while talking with the system, the AI’s performance became comparable to that of a human.”
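To illustrate what combining linguistic and biosignal information might look like, here is a hedged sketch of a logistic model over concatenated features. All feature names, values, and weights are invented for illustration and are not taken from the study or the Hazumi1911 corpus.

```python
# Hypothetical sketch: concatenating linguistic and biosignal features
# before classification, rather than using either modality alone.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_pleasure(linguistic, biosignal, weights, bias):
    """Logistic model over the concatenated feature vector."""
    features = linguistic + biosignal
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return sigmoid(z)

# Illustrative features: [positive-word ratio, normalized utterance length]
# + [skin-potential response rate, normalized heart-rate variability].
linguistic = [0.3, 0.8]
biosignal = [0.6, 0.4]
weights = [1.2, -0.5, 2.0, 0.7]   # hypothetical trained weights

score = predict_pleasure(linguistic, biosignal, weights, bias=-0.9)
print(score > 0.5)  # prints True for these illustrative values
```

A real system would learn the weights from labeled exchanges (such as the self-rated pleasure levels described above) instead of hard-coding them.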

The new findings suggest that sensing physiological cues in humans could lead to emotionally intelligent AI-based dialogue systems. Such systems could then be used to identify and monitor mental illness by detecting changes in daily emotional states. Another possible use case is in education, where they could identify whether a learner is interested in a topic or bored, which could be used to adjust teaching strategies.
