🔥 Land A High Paying Web3 Job In 90 Days LEARN MORE

Study Highlights Limitations in AI-Generated  Empathy

In this post:

  • AI empathy lacks keenness and is prone to biases.
  • Emotional reactions are strong, but interpretation is weak.
  • Researchers warn of potential harm and call for critical perspectives

Through a study from three universities, Cornell University, Olin College, and Stanford University, They have come to realize that AI’s capabilities of displaying empathy in conversation agents, such as Alexa and Siri, are rather limited. The findings of this study, submitted to the CHI 2024 conference, indicate that although the CAs are good at showing emotional reactions, the situation gets difficult when interpreting and exploring the users’ experience is concerned. 

Study uncovers biases and discrimination

 Using the data collected from researcher Andrea Cuadra from Stanford, this study is aimed at measuring how CAs detect and respond to different social identities among humans. Testing for 65 various identities, the research study found that CAs are inclined to categorize individuals, and the identities especially concerning sexual orientation or religion are the most vulnerable to this habit. 

CAs, the knowledge of which are incorporated in the language models (LLMs), that are trained on great volumes of human created data, may therefore have the harmful biases that are in that data they have used. It is prone to discrimination specifically,CAs self can be on the go to show solidarity towards ideologies which have negative effects on people such as Nazism. 

The implications of  automated empathy 

 It was revealed from his artificial empathy concept that the applications of it in education and the health care sector are varied. On the other hand, a great deal of emphasis on the need of humans to remain vigilant and to avoid the tilling of the problems that may arise with such advances. 

See also  Attackers are using fake Telegram groups and Twitter accounts to target crypto users with sophisticated scam tactics

As stated by the researchers, the LLMs demonstrate a high ability in the provision of emotional responses, but at the same time, they are lame or lack sufficient abilities for interpretation and exploration of user experiences. This is a downside since the UIs may not be able to fully engage with clients in deep emotional interactions beyond the one whose layers have been stripped off.

Land a High-Paying Web3 Job in 90 Days: The Ultimate Roadmap

Share link:

Disclaimer. The information provided is not trading advice. Cryptopolitan.com holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.

Most read

Loading Most Read articles...

Stay on top of crypto news, get daily updates in your inbox

Editor's choice

Loading Editor's Choice articles...
Cryptopolitan
Subscribe to CryptoPolitan