TAMPA — When a service encounter goes south, customers expect empathy. Hearing an
employee say, “I share your frustration,” can calm tensions and rebuild trust.
But new research from the University of South Florida suggests that when a chatbot
tries the same tactic, it can backfire.
A new study finds that empathetic responses from AI-powered service chatbots can unintentionally
worsen customer reactions.
“Empathy from a chatbot can feel intrusive and undermine trust,” said Dezhi Yin, the study’s co-author and associate professor of information systems at the USF
Muma College of Business.
Across three experiments — including interactions with a live large language model-based
chatbot — the researchers examined how customers respond when chatbots acknowledge
and mirror users’ negative emotions.
Instead of soothing customers, these empathetic chatbot messages often triggered psychological
reactance — a negative emotional response that occurs when people feel their sense
of control is threatened or their boundaries are crossed.
Customers reacted negatively to the idea that a nonhuman system could recognize and
respond to their emotions. That discomfort made the chatbot seem less competent and
reduced overall perceptions of service quality and customer satisfaction.
The findings suggest that customers hold different expectations for humans and artificial
intelligence, particularly around emotional awareness. Making chatbots more humanlike
is not always the right strategy — especially in sensitive service recovery situations.
Other co-authors of “Bots with Empathy: Reactance Against Emotion-Aware AI Agents
in Customer Service” include Elizabeth Han, assistant professor at McGill University,
and Han Zhang, professor at Hong Kong Baptist University.