Designing Persuasive Interactions with Pet-Type Virtual Agents: Effects of Emotion and Context in Mixed Reality
Open Access
Article
Conference Proceedings
Authors: Kaoru Sumi, Rio Harada
Abstract: Persuasive technology research has increasingly examined how computational systems can encourage behavioral change, with applications ranging from health management to education and sustainability. While many approaches have focused on text-based messages, visual prompts, or gamification techniques, a growing body of work emphasizes the role of emotionally expressive agents that interact with users in more natural and embodied ways. Pet-type agents in particular offer unique potential, as their familiar and socially accepted forms can elicit empathy and trust. In everyday life, people readily interpret the intentions of pets through gestures such as approaching, barking, or pointing with their gaze, making this a promising model for persuasive design. However, the combined effects of emotions and specific action strategies on persuasion remain underexplored, especially in immersive environments.

Building on our prior findings that sadness expressed by a four-legged agent could effectively promote compliance while happiness and anger were often misinterpreted, this study investigates how emotional expressions and behavioral cues interact to influence persuasion in a mixed reality (MR) environment. We developed a virtual pet dog that combined four types of emotional expression—sadness, happiness, anger, and neutral—with three categories of behavioral action: attention calling (e.g., approaching, barking, making eye contact), guiding (moving toward a target location), and pointing (alternating gaze between an object and its destination). These combinations were applied across a variety of everyday contexts, including pet-related tasks such as feeding and tidying toys; non-pet-related tasks such as reading, putting away books, and waste disposal; behaviors to be limited, such as smartphone use; and emergency warnings, such as moving outside a room.
Two experimental conditions were compared: immersive MR interaction using HoloLens 2 and video-based presentation of the same persuasive behaviors. The results revealed that combinations such as sadness with pointing were perceived as supportive and effective in encouraging compliance, while anger with guiding sometimes evoked discomfort. Video conditions achieved higher success rates for visually straightforward tasks such as reading or feeding, whereas MR conditions highlighted the importance of interactivity, as participants expected the agent to respond contingently to their actions. Subjective reports indicated that MR participants viewed the agent less as a visual prompt and more as a social partner, leading to expectations for dialogue and responsiveness.

These findings suggest that persuasive design in MR requires not only appropriate emotion–action pairings but also mechanisms for interactive responsiveness. By clarifying the role of emotional and behavioral cues in daily contexts, this study contributes to human-centered design by providing guidelines for persuasive agents that support habit improvement in everyday life. The implications extend to applications in education, healthcare, and eldercare, where virtual companions may offer scalable, engaging, and socially acceptable means of encouraging positive behaviors.
Keywords: Human-Centered Design, Persuasive Technology, Virtual Agents, Mixed Reality, Emotion and Behavior Design
DOI: 10.54941/ahfe1006912
AHFE Open Access