An Interactive Virtual Assistant for Flexible Just-in-Time Training
Open Access
Article
Conference Proceedings
Authors: Glenn Taylor, Jeffrey Craighead, Kortney Menefee, Logan Lebanoff, Christopher Ballinger, Stephen McGee
Abstract: Many situations call for a person to perform tasks in which they are not an expert, for which they do not need (or want) to become an expert, but which they still must perform in the moment. Simple examples include home maintenance tasks like changing a furnace filter or car maintenance tasks such as refilling the wiper fluid. Performing the task might involve consulting a manual or searching the internet for relevant material. However, even when useful content is found, it’s not always easy to refer to these sources while simultaneously performing the task. Looking back and forth between a manual and the thing being fixed, turning pages, or typing on a keyboard while hands are occupied with tools all make the process more difficult. It can also be challenging, especially for non-experts, to visually relate a diagram in a manual to the actual system being worked on. To help address these challenges, we have been developing an Autonomous Virtual Assistant (AVA) that helps someone perform a task by walking them through it step by step. AVA can be thought of as an interactive helper looking over the user’s shoulder, using different modalities and tools, including mixed reality, to convey information. This work has focused on flexibility toward the user and the situation, both in providing helpful information to the user and in accepting input from the user. Given a procedure, AVA determines on the fly how to present information based on what resources are available and what content needs to be conveyed. In a mixed reality setting, this might include showing text or imagery on a virtual heads-up display, or overlaying 3D imagery on the physical object being worked on to help orient the user. To make mixed reality content readily available, the system provides easy ways to align 3D virtual models to the physical objects being worked on. Where speech is the only modality available, the system reads procedure steps to the user or helps the user navigate to a needed item using speech alone. To suit the user’s situation, the system also affords different interaction modalities, such as touching virtual buttons, pointing to objects in the real world, or using voice alone in a heads-up, hands-free manner. The user moves through procedure steps and related information, asking questions when they need clarification or more detail. Example interactions include initiating a session (“Okay AVA, start the wiper refill procedure”), requesting visual material related to a task (“What does that look like?” or “Show me that image”), or asking for more information on a procedure step (“How do I do that?”). AVA uses the content and resources at hand to help the user complete the procedure. In this paper, we describe the motivation for AVA, the system design, its application in real-world tasks, user feedback from hands-on evaluations, and future directions.
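To make the interaction model concrete, the following is a minimal sketch, in Python, of the kind of resource-driven modality selection and spoken-command routing the abstract describes. It is an illustration under assumed names (Step, Assistant, and so on), not AVA's actual implementation.

# Minimal sketch: choose the richest available output modality for each
# procedure step, and route a few example spoken commands. All names and
# behaviors here are hypothetical, not taken from the AVA system itself.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Step:
    text: str                      # spoken/displayed instruction
    image: Optional[str] = None    # illustrative image, if authored
    overlay: Optional[str] = None  # 3D asset to align to the physical object

@dataclass
class Assistant:
    has_display: bool              # virtual heads-up display available?
    has_mixed_reality: bool        # 3D overlays available?
    steps: List[Step] = field(default_factory=list)
    index: int = 0

    def present(self) -> None:
        """Convey the current step using whatever resources are available."""
        step = self.steps[self.index]
        if self.has_mixed_reality and step.overlay:
            print(f"[overlay] aligning {step.overlay} to the physical object")
        elif self.has_display and step.image:
            print(f"[display] {step.image}")
        # Speech is always available as the fallback modality.
        print(f"[speak] Step {self.index + 1}: {step.text}")

    def handle(self, utterance: str) -> None:
        """Route a small set of example spoken commands to actions."""
        u = utterance.lower()
        if "next" in u:
            self.index = min(self.index + 1, len(self.steps) - 1)
            self.present()
        elif "look like" in u or "show me" in u:
            step = self.steps[self.index]
            print(f"[display] {step.image or 'no image for this step'}")
        elif "how do i" in u:
            print(f"[speak] (expanded detail for step {self.index + 1})")

# Example session, mirroring the interactions quoted in the abstract:
ava = Assistant(has_display=True, has_mixed_reality=False, steps=[
    Step("Open the hood.", image="hood.png"),
    Step("Locate the wiper fluid cap.", image="cap.png", overlay="cap.glb"),
])
ava.present()                          # "Okay AVA, start the wiper refill procedure"
ava.handle("What does that look like?")
ava.handle("Next step")

The point mirrored here is that the same procedure content can be rendered through whichever output channel the situation allows, with speech serving as the universal fallback.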
Keywords: Just-in-time training, mixed reality, virtual assistant, artificial intelligence, natural interaction, interactive systems, multi-modal interaction
DOI: 10.54941/ahfe1005755