Explaining algorithmic decisions: design guidelines for explanations in user interfaces

Open Access
Article
Conference Proceedings
Authors: Charlotte Haid, Alicia Lang, Johannes Fottner

Abstract: Artificial Intelligence (AI)-based decision support is becoming increasingly important in manufacturing and logistics. Users of AI-based systems expect to understand the decisions these systems make. In addition, users such as workers and managers, but also works councils in companies, demand transparency in the use of AI. Against this background, AI research faces the challenge of making the decisions of algorithmic systems explainable. Algorithms, especially in the field of AI but also classical ones, do not provide an explanation for their decisions. To generate such explanations, new algorithms have been designed that explain the decisions of other algorithms post hoc. This subfield is called explainable artificial intelligence (XAI). Methods such as local interpretable model-agnostic explanations (LIME), Shapley additive explanations (SHAP) or layer-wise relevance propagation (LRP) can be applied. LIME is an algorithm that can explain the predictions of any classifier by learning an interpretable model locally around the prediction. In image recognition, for example, LIME can highlight the image areas on which the classifier based its decision; its authors even show that an algorithm may arrive at its result based on the image caption. SHAP, a game-theoretic approach that can be applied to the output of any machine learning model, connects optimal credit allocation with local explanations, using Shapley values from game theory for the allocation. In XAI research, explanatory user interfaces and user interactions have hardly been studied. One of the most crucial factors in making a model understandable through explanations is involving users in XAI; human-computer interaction expertise is needed in addition to technical expertise. According to Miller and Molnar, good explanations should be contrastive, explaining why event A happened instead of another event B rather than merely why event A occurred. In addition, it is important that explanations are limited to only one or two causes and are thus formulated selectively. The literature formulates four guidelines for explanations: use natural language, use various methods to explain, adapt to the mental models of users, and be responsive so that a user can ask follow-up questions. Explanations produced by XAI methods are often very mathematical, and deep knowledge of details is needed to understand them. In this paper, we present design guidelines to help make explanations of algorithms understandable and user-friendly. We use the example of AI-based algorithmic scheduling in logistics and show the importance of a comprehensive user interface in explaining decisions. In our use case, AI-based shift scheduling in logistics, where workers are assigned to workplaces based on their preferences, we designed a user interface to support transparency as well as explainability of the underlying algorithm and then evaluated it with various users and two different user interfaces. We show excerpts from the user interface and our explanations for users, and give recommendations for creating explanations in user interfaces.
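
The post-hoc methods named in the abstract can be illustrated with a minimal sketch. The following example applies SHAP and LIME to a generic scikit-learn classifier; it assumes the open-source `shap` and `lime` Python packages and a toy dataset, and is not the scheduling system described in the paper.

```python
# Minimal sketch: post-hoc explanations with SHAP and LIME for a generic
# scikit-learn classifier. Assumes the open-source `shap` and `lime` packages;
# illustrative only, not the scheduling system described in the paper.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# SHAP: Shapley-value-based feature attributions for a single prediction.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test[:1])

# LIME: fit an interpretable local surrogate model around the same prediction.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification")
lime_explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(lime_explanation.as_list())  # top local features with their weights
```

Both outputs are feature-level attributions for one decision; turning such raw weights into user-friendly, contrastive explanations in a user interface is the gap this paper addresses.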

Keywords: Explainable Artificial Intelligence, Scheduling, Design Guidelines

DOI: 10.54941/ahfe1003764
