Psychology of Selling

Consumers prefer emotionally intelligent AI, but not for guilty pleasures

by Eric W. Dolan
December 10, 2025

Artificial intelligence has rapidly moved from a backend tool to a frontline customer service agent. During recent major retail events, AI-driven chatbots managed huge surges in site traffic and helped millions of consumers find deals. As these systems advance, companies are racing to make them sound more human. The goal is often to create a seamless interaction where the machine understands not just what the customer types, but what the customer feels or intends.

However, the assumption that “smarter is better” might not always apply to sales. While consumers generally appreciate efficiency, their comfort level with a machine that claims to understand their internal thoughts is less clear. This is particularly true when the AI begins to offer advice based on social cues rather than simple data matching.

A research team recently investigated this dynamic. They sought to determine how an AI’s ability to infer human mental states influences a consumer’s willingness to buy a recommended product. Their findings, published in the journal Psychology & Marketing, suggest that while advanced social intelligence generally boosts sales, it can backfire depending on the type of product being sold.


Investigating the “Theory of Mind”

The investigation was led by Tianyu Liu, Jingyi Yang, and Kexin Li from Durham University Business School, alongside Yuanyi Xu from Southampton Business School. The team identified a gap in current marketing literature. Previous studies focused heavily on how people perceive the “mind” of a machine, but few examined the AI’s actual performance of social intelligence.

To address this, the researchers focused on a concept called “Theory of Mind” (ToM). In human psychology, this refers to the ability to attribute mental states—such as beliefs, intents, desires, and emotions—to oneself and others. It is the cognitive skill that allows a person to understand that another individual has a perspective different from their own.

The researchers wanted to know what happens when an AI displays this capability. They relied on a framework known as Expectancy Violation Theory. This theory suggests that people enter interactions with specific expectations. When an interaction meets or exceeds those expectations in a positive way, the outcome is usually favorable. When expectations are violated negatively, the interaction suffers.

The team proposed that an AI with high ToM capabilities would generally exceed user expectations by acting more like a perceptive human partner. They hypothesized that this would create a sense of “social presence,” or the feeling of interacting with a real entity. This feeling would then lead to higher acceptance of the AI’s suggestions. However, they also suspected that the type of product—specifically whether it was a “virtue” product (healthy, long-term benefit) or a “vice” product (indulgent, immediate gratification)—would change how consumers reacted to this intelligence.

Testing the AI in Social Scenarios

The researchers designed a series of six experiments to test these ideas, recruiting participants from both China and the United States to test whether the findings held across different cultures.

In the first study, participants imagined shopping for an outfit for a party. They read a scenario where they consulted a friend and an AI assistant. The friend suggested a bold style, but the user preferred something conservative. In the “Low ToM” condition, the AI simply recommended bold outfits based on the friend’s suggestion.

In the “High ToM” condition, the AI recognized the user’s hesitation. It stated that it sensed the user was not interested in the bold style and was likely agreeing with the friend just to be polite. The AI then recommended a modest outfit that aligned with the user’s true preference. The analysis showed that participants were significantly more likely to accept the recommendation from the AI that demonstrated this social understanding.

Measuring the Feeling of Presence

The subsequent experiments investigated why this preference occurred. In scenarios involving restaurant choices and family trip planning, the researchers measured the participants’ sense of “social presence.”

For the trip planning scenario, participants had to choose between a quiet location they preferred and a lively location their family wanted. The High ToM AI acknowledged the user’s stress and need for relaxation while still finding a compromise that suited the children. The Low ToM AI simply suggested the lively place.

The data revealed a chain of events. When the AI displayed high ToM, participants rated it as having a higher social presence. They felt they were interacting with a distinct, intelligent social entity. This increased perception of social presence was directly linked to a higher intention to visit the recommended destination. The ability to “read” the room made the AI feel more real, which made its advice more persuasive.

The Role of Product Type

The researchers then introduced a variable to see if this effect held true for all purchases. They set up experiments involving food delivery, distinguishing between “virtue” products and “vice” products. Virtue products were defined as those offering long-term benefits but lower immediate gratification, such as a low-fat grilled chicken breast. Vice products were defined as those offering immediate pleasure but potential long-term costs, such as fried chicken and sugary milk tea.

Participants interacted with an AI assistant via text or audio clips. In the virtue condition, the AI acted as a partner. It acknowledged the user’s health goals and recommended the healthy option. Here, the High ToM AI was successful. Consumers accepted the advice and bought the healthy food.

In the vice condition, the results flipped. The High ToM AI acknowledged that the fried chicken was tempting but reminded the user of the high calorie count and their weight management goals. It attempted to guide the user away from the indulgence.

The analysis showed that in these vice scenarios, consumers rejected the High ToM AI. They preferred the Low ToM agent, which simply processed the request for fried chicken without offering judgment or normative guidance.

Understanding the Consumer Pushback

The researchers found that the mechanism driving this rejection was a misalignment of roles. When consumers buy virtue products, they often engage in self-regulation. They view the AI as a partner helping them achieve a long-term goal. The High ToM AI fits this role perfectly by offering support and understanding.

When consumers buy vice products, their motivation is immediate gratification and autonomy. They do not want moral guidance or a reminder of the long-term costs. They expect the AI to act as a servant—compliant, neutral, and non-judgmental.

When the High ToM AI intervened with “understanding” advice about calories and health, it violated the user’s expectation of autonomy. It was perceived as intrusive rather than helpful. In these moments, the sophisticated social intelligence that worked well for the trip planning and the healthy food became a liability. The feeling of social presence did not lead to acceptance because the social entity was perceived as annoying or controlling.

Validating the Findings

To ensure these results were not due to the specific food items, the researchers conducted a final study using a fictitious chocolate brand. They manipulated the product perception using advertising. One group saw ads highlighting the chocolate’s health benefits (virtue), while the other saw ads highlighting its indulgent taste (vice).

The results confirmed the previous findings. When the chocolate was framed as a healthy choice, the High ToM AI sold it best. When the chocolate was framed as an indulgence, the Low ToM AI was more effective. The researchers also ruled out other possibilities, such as the AI simply seeming more accurate or empathetic. The key factor remained the interaction between the AI’s social presence and the nature of the product.

Implications for Business Strategy

These findings offer specific guidance for companies deploying AI agents. The data suggests that investing in highly emotionally intelligent AI is not always the correct strategy. It depends entirely on the context of the interaction.

For industries focused on well-being, finance, or education, an AI with high ToM capabilities is beneficial. In these “virtue” contexts, consumers look for guidance and partnership. An AI that can infer anxiety, hesitation, or long-term goals will likely increase user engagement and trust.

Conversely, businesses in sectors driven by immediate pleasure—such as fast food, gaming, or entertainment streaming—should exercise caution. In these “vice” contexts, consumers often prefer a frictionless, servant-like experience. An AI that attempts to “understand” the user too deeply or offers unsolicited guidance may trigger resistance. A simpler, more mechanical interaction that respects the user’s autonomy is likely to yield better sales results.

Directions for Future Inquiry

The study opens several new avenues for research. The current experiments relied primarily on text and audio interactions. It remains to be seen how these dynamics play out with visual avatars or in virtual reality environments.

Additionally, the study looked at single interactions. Future research could explore how these relationships develop over time. It is possible that a High ToM AI might eventually earn the trust required to make recommendations in vice contexts without causing offense.

Finally, the voice characteristics used in the audio study were limited. Investigating how different vocal tones, accents, or genders influence the perception of Theory of Mind could provide further nuance for developers building the next generation of digital assistants.



Psychology of Selling is part of the PsyPost Media Inc. network.
