Explainable AI Solutions for the U.S. Coast Guard Command Center: A Human-Centered Collaboration
Open Access Conference Proceedings Article
Authors: Audrey Haque, Anthony Lapadula, Jessamyn Liu, Sara Falkson, Karli Blanchard, Richard Coleman, Amna Greaves
Abstract: With advances in artificial intelligence (AI) comes the responsibility to ensure that deployed AI solutions are ethical, useful, and safe. Explainable AI (XAI) has drawn increasing interest from the AI research community and seeks to provide understandable descriptions of how machine learning (ML) models generate their outputs. In short, XAI allows users to peek into the incredibly complex black boxes that most ML models have become. As successful adoption of new XAI tools necessitates designing “with,” and not just “for,” end users, this paper explores the use of human-centered, participatory design in partnership with United States Coast Guard (USCG) command center watchstanders. Our process included traditional research methods such as interviews, observation, and contextual inquiry, as well as user experience (UX) workshop research methods such as experience mapping, post-ups, affinity diagramming, forced-ranking prioritization exercises, and storyboarding. Our goals were to understand the unique problems and opportunities of the USCG’s Search and Rescue (SAR) mission, collaboratively generate desirable XAI solution ideas with command center watchstanders, elicit watchstander ideas and requirements for explainability features, and prototype our ideas to better meet real-world operational needs.
Keywords: Explainable AI, XAI, Artificial Intelligence, AI, Human-Centered Design, Participatory Design, Human-Centered AI, Human-Machine Teaming, USCG
DOI: 10.54941/ahfe1005602