Generating Comic Instructions for Self-Explaining Ambient Systems
Open Access
Article
Conference Proceedings
Authors: Marvin Berger, Börge Kordts, Andreas Schrader
Abstract: Dynamically connecting technical components to form Ambient Systems offers opportunities in a variety of use cases. In particular, the flexible integration of smart objects and applications allows for solutions that are adaptively tailored to the needs and daily tasks of the respective users of smart environments. However, this approach also entails challenges, as the dynamic connections can obfuscate the handling of such systems. To counteract this issue, self-explainability of the involved components and dynamically generated instructions have been proposed. In this paper, we present a novel comic instruction rendering engine that generates user instructions based on the self-descriptions of all involved components. Current research on self-explainability predominantly explains system behaviour and adaptation logic but pays little attention to user–system interactions or the dynamics of interconnected adaptive ensembles. The Ambient Reflection framework, in contrast, enables the runtime generation of instructions for users in smart environments. The foundation for the instruction generation is the Smart Object Description Language (SODL), which supports a formal, hierarchical description of Smart Objects and Ambient Applications and their interactions. The framework collects the self-descriptions of all involved components and merges them into an ensemble description. Interactions in SODL are structured across the levels of the Virtual Protocol Model, ranging from the abstract goal level describing the objective (e.g., “start the music”) through the semantic, lexical, and alphabetic levels down to the physical level describing the actual physical movement (e.g., “move your hand horizontally left”). Graphical illustrative media can be linked to illustrate required user actions and system responses.
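The layered interaction structure described above can be illustrated with a minimal sketch. The field names below are assumptions for illustration only, not the actual SODL schema; the level labels and example phrases follow the Virtual Protocol Model levels named in the abstract.

```python
# Hypothetical sketch of an SODL-style interaction description.
# Field names are illustrative assumptions, not the real SODL schema.
interaction = {
    "goal": "start the music",                       # goal level: the objective
    "semantic": "issue a play command",              # semantic level
    "lexical": "swipe-left gesture",                 # lexical level
    "alphabetic": "single continuous stroke",        # alphabetic level
    "physical": "move your hand horizontally left",  # physical level: actual movement
    "media": {"image": "swipe_left.png"},            # linked illustrative media
}

def physical_instruction(desc):
    """Return the user-facing physical action for an interaction."""
    return desc["physical"]

print(physical_instruction(interaction))
```

A rendering engine could traverse such a description top-down, choosing the level of detail appropriate for the target medium.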
To generate user-tailored instructions from the SODL self-descriptions, the framework supports different rendering engines, e.g., for text or web pages. However, the framework does not yet provide instructions in comic format. Much work has been published on manual or semi-automated comic generation and on converting existing visual media into comics, and generation based on artificial intelligence has also become popular. However, those approaches require significant manual intervention, an upfront problem description, and iteration to produce coherent and meaningful results. In this paper, we extend the Ambient Reflection framework with a Comic Rendering Engine that automatically generates comic instructions based on the ensemble self-description. For this purpose, the SODL descriptions are serialized as JSON. The Comic Engine invokes a Comic Generator as a sub-process and passes the structured JSON information to it. The Comic Generator first transforms the data into a Comic Book Markup Language (CBML) document. This transformation flattens the nested JSON structure while preserving the essential level information (e.g., Goal, Task), resulting in a bundled data format suitable for generation. The system accesses visual and textual representations for devices, applications, and interactions, which are embedded within the self-descriptions of the respective components. The textual content is adapted to fit the special needs of comic design. The actual generation of comic elements such as characters and speech bubbles, as well as their arrangement, is delegated to a web-based, open-source software package (Comicgen) and inserted into the CBML panels. To ensure a uniform look and prevent visual obstruction, objects are scaled so as not to exceed a maximum size, and collision avoidance strategies are applied when placing elements.
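The flattening step described above can be sketched as a depth-first walk that emits one flat entry per node while keeping its level label, roughly how the nested ensemble description could be prepared for CBML panels. The node structure and level names are assumptions for illustration, not the actual serialization format.

```python
# Minimal sketch of flattening a nested JSON-like ensemble description
# while preserving level information (e.g., Goal, Task), as described
# above. Structure and level names are illustrative assumptions.
ensemble = {
    "level": "Goal", "text": "start the music",
    "children": [
        {"level": "Task", "text": "perform swipe-left gesture",
         "children": [
             {"level": "Physical", "text": "move your hand horizontally left",
              "children": []},
         ]},
    ],
}

def flatten(node, out=None):
    """Depth-first walk: emit one flat entry per node, keeping its level."""
    if out is None:
        out = []
    out.append({"level": node["level"], "text": node["text"]})
    for child in node.get("children", []):
        flatten(child, out)
    return out

for entry in flatten(ensemble):
    print(entry["level"], "-", entry["text"])
```

Each flat entry could then be mapped to a CBML panel, with the level label deciding how prominently the text is rendered.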
The final output is rendered as a PDF file that can be displayed on devices in the environment. We conducted a user study to investigate the quality of the generated comic instructions. The study focused on the comprehensibility of the generated comic tutorials and the clarity of their textual and graphical elements. The emphasis was therefore placed on the quality of the instructions themselves rather than on the system design or the usability of the ensembles being explained. The investigation was conducted as a quasi-experimental mixed-design study with questionnaires, observation, and semi-structured interviews (N=13). The laboratory Wizard-of-Oz setup included a collection of interaction devices based on gestures and voice. Tasks included light and temperature control in a smart home as well as interaction with a novel in-bed application for intensive care patients using a dedicated ball-shaped interaction device. The results indicate that automatically generated comic instructions can effectively support the operation of Ambient Systems consisting of interconnected Smart Objects and applications. Feedback showed that the tutorials’ consistent structure was positively received, and the instructions were considered clear and understandable.
Keywords: Smart Object Guidance, Self-Reflection, Comic Generation
DOI: 10.54941/ahfe1007139


AHFE Open Access