Types of Interaction-Based Information Conveyed Through LED-Based Robot Expressions
Open Access
Article
Conference Proceedings
Authors: Jounghyun Kim, Kwanmyung Kim
Abstract: LEDs are increasingly used as expressive media in social robotics due to their versatility and efficiency. However, while LEDs effectively convey basic system statuses such as power, battery life, or error alerts, their potential to represent more complex forms of information remains largely unexplored. Current implementations often rely on designers’ intuition rather than a structured methodology, leading to inconsistencies and challenges in user interpretation. To address this gap, this study investigates the information types that can be conveyed through LED-based expressions and establishes a systematic framework for their design. We categorize LED-based signals into structured information types that are either primary (serving as the main communication channel) or redundant (supporting other modalities). Additionally, we distinguish between referential expressions, which rely on contextual understanding, and non-referential expressions, which can be interpreted independently. By defining these categories, this research provides a foundation for enhancing LED-based communication in human-robot interaction (HRI). To ground this framework, we conducted an ideation study with six graduate students specializing in product design and social robotics development. Participants explored how different LED design factors (on/off states, intensity, rhythm, and color) can be manipulated to represent information. The study used a horizontally arranged LED strip to align with human perceptual tendencies, particularly those related to facial expressions and motion perception. The results identified 20 distinct information types that can be effectively represented through LED-based expressions, including system notifications, user responses, paralinguistic cues such as laughter or humming, gestures that mimic human movement, and affective states such as happiness or surprise. Notably, the study found a direct relationship between the number of LEDs and the complexity of information representation: when the number of LEDs matched the information’s bit-level structure, users interpreted the display discretely, whereas when the number of LEDs exceeded the required bits, users perceived the expression holistically rather than as a set of binary signals. For instance, to represent a concept such as “water intake,” participants preferred a gradual illumination of the strip over a strict binary encoding. Our findings further suggest that LED-based expressions requiring contextual information for interpretation, such as ambiguous gestures or emotional states, may benefit from multimodal integration with other robotic expressions such as motion or sound. However, certain gestures, particularly those involving multiple LEDs, were consistently recognized without external cues, suggesting that increasing the number of LEDs enhances independent interpretability. This research contributes to HRI by providing a structured approach to LED-based expressions, improving their clarity and reducing reliance on designer intuition. By integrating cognitive communication models, this study highlights the importance of aligning LED expressions with human perceptual and interpretive tendencies. Future research should validate these findings through user studies and expand the framework to incorporate dynamic and multicolor LED interactions. These insights have implications for the design of expressive robotic systems in both social and functional domains.
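To make the abstract’s discrete-versus-holistic finding concrete, the following minimal Python sketch (not from the paper; the 8-LED strip length, function names, and example values are illustrative assumptions) contrasts a bit-level encoding, where each LED carries one bit, with the gradual-fill style participants reportedly preferred for quantities such as water intake:

```python
# Minimal sketch (illustrative only, not from the paper): two encoding
# styles for a horizontal LED strip, rendered as ASCII for simulation.
# The 8-LED strip size and all names are assumptions for this example.

NUM_LEDS = 8

def binary_encoding(value: int, num_leds: int = NUM_LEDS) -> list[int]:
    """Discrete, bit-level encoding: LED i shows bit i of `value`.
    Corresponds to the case where the LED count matches the
    information's bit width, which users read as discrete signals."""
    return [(value >> i) & 1 for i in range(num_leds)]

def gradual_fill(fraction: float, num_leds: int = NUM_LEDS) -> list[int]:
    """Holistic encoding: light LEDs left to right in proportion to a
    quantity (e.g. the 'water intake' example), which users read as a
    single whole rather than as individual bits."""
    lit = round(max(0.0, min(1.0, fraction)) * num_leds)
    return [1] * lit + [0] * (num_leds - lit)

def render(states: list[int]) -> str:
    """ASCII stand-in for a physical strip: '#' = on, '.' = off."""
    return "".join("#" if s else "." for s in states)

if __name__ == "__main__":
    print("binary value 5: ", render(binary_encoding(5)))   # #.#.....
    print("intake 62.5%:   ", render(gradual_fill(0.625)))  # #####...
```

Run as a script, it prints both renderings side by side, showing why a proportional fill reads at a glance as one quantity while the bit pattern must be decoded LED by LED.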
Keywords: LED-Based Expression, Nonverbal Communication, HRI, Interaction Design
DOI: 10.54941/ahfe1006428