Efficiently Explained: Leveraging the SEEV Cognitive Model for Optimal Explanation Delivery

Open Access Article (Conference Proceedings)
Authors: Akhila Bairy, Martin Fränzle

Abstract: It is inherent to autonomous systems that they exhibit very complex behaviour, and these complex and flexible patterns of behaviour are in general less comprehensible and foreseeable to the humans interacting with them. It is generally accepted wisdom that suitable explanations can help humans understand the functioning of these systems; this, in turn, enhances safety, trust, and societal acceptance through meaningful interaction. Our algorithmic approach starts from the observation that the design of explanations has two essential dimensions: content on the one hand, and frequency and timing on the other. While there has been extensive research on the substance of explanations, the precise timing of explanations has received comparatively little exploration. Existing studies on explanation timing often draw only broad distinctions, such as delivering explanations before, during, or after use of the system. Regarding Autonomous Vehicles (AVs), studies indicate that occupants generally prefer receiving an explanation before an autonomous action occurs. However, extended exposure to and use of a specific AV is likely to diminish the need for explanations. Since understanding explanations adds to the (cognitive/mental) workload, this observation suggests the importance of optimising both the frequency of explanations (skipping them when unnecessary, to minimise workload) and their precise timing (delivering them when they offer the maximum reduction in workload). The interesting fact here is that additional mental workload for the passengers can be caused both by providing and by skipping an explanation: any explanation that is presented requires cognitive processing for its comprehension, even when its content is considered redundant by the addressee (e.g. because the content is already familiar to the passenger) or is not memorised (e.g. when an early explanation is displaced by successive events due to the limited capacity of working memory). Conversely, a skipped explanation may prompt the passenger to actively scan the environment for potential cues (e.g. to understand the reasons for an unfamiliar action of the AV), and such an attention strategy induces cognitive workload itself. Concerning the latter effect, Kantowitz has investigated the relation between attention and mental workload and concluded that even simple models of attention suffice to predict mental workload. In this work, we develop a probabilistic reactive game model of mental workload and of the impact of explanations on it. It consists of a workload model based on SEEV as a probabilistic component modelling the human, and the self-explaining AV function as the other player. The resulting 1.5-player game, or Markov Decision Process, allows us to automatically synthesise a rational reactive strategy that presents explanations to the human only when beneficial, and then at the optimal time, thereby minimising the cognitive workload of the human.
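
Background, beyond the abstract itself: SEEV is the attention model of Wickens and colleagues, which scores each area of interest (AOI) by its Salience, the Effort of shifting attention to it, the Expectancy of relevant events occurring there, and the Value of the task it serves. In its common coefficient form, the predicted probability of attending AOI i is P(A_i) = s*S_i - ef*EF_i + ex*EX_i + v*V_i, where effort enters negatively because attention shifts are costly, and s, ef, ex, v are empirically fitted weights.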
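
As a purely illustrative sketch of the kind of synthesis the abstract describes (a 1.5-player game, i.e. a Markov Decision Process, solved for a workload-minimising reactive strategy), the following Python toy model lets the AV choose between EXPLAIN and SKIP before each manoeuvre, with the passenger's familiarity as the probabilistic human component. All states, costs, and transition probabilities here are invented assumptions, not the authors' SEEV-based model:

# Hypothetical toy model (not the authors' implementation): the AV decides,
# before each autonomous manoeuvre, whether to EXPLAIN it or to SKIP the
# explanation.  The passenger is the probabilistic "half player": their
# familiarity with the manoeuvre evolves stochastically, and a SEEV-inspired
# cost charges workload for comprehending an explanation (EXPLAIN) or for
# scanning the scene for cues (SKIP).  Value iteration yields the
# workload-minimising reactive strategy.  All numbers are illustrative.

FAMILIARITY = range(4)   # 0 = novel manoeuvre ... 3 = fully familiar
HORIZON = 10             # number of upcoming manoeuvres considered

def cost(f, action):
    """Per-step mental workload (lower is better)."""
    if action == "EXPLAIN":
        # Comprehension effort: even a redundant explanation must be processed.
        return 1.0 + 0.5 * f
    # SKIP: the passenger scans the environment for cues; the less familiar
    # the manoeuvre (low expectancy, SEEV-style), the more effortful the scan.
    return 3.0 - 0.9 * f

def transitions(f, action):
    """List of (next_familiarity, probability) pairs for the human component."""
    nf = min(f + 1, max(FAMILIARITY))
    if action == "EXPLAIN":
        return [(nf, 0.8), (f, 0.2)]   # an explanation usually sticks
    return [(nf, 0.2), (f, 0.8)]       # mere observation teaches slowly

def synthesise():
    """Finite-horizon value iteration over expected cumulative workload."""
    V = {f: 0.0 for f in FAMILIARITY}
    policy = {}
    for t in reversed(range(HORIZON)):
        newV = {}
        for f in FAMILIARITY:
            q = {a: cost(f, a) + sum(p * V[nf] for nf, p in transitions(f, a))
                 for a in ("EXPLAIN", "SKIP")}
            best = min(q, key=q.get)
            policy[(t, f)] = best
            newV[f] = q[best]
        V = newV
    return policy

if __name__ == "__main__":
    pol = synthesise()
    for f in FAMILIARITY:
        print(f"familiarity={f}: first manoeuvre -> {pol[(0, f)]}")

On these assumed numbers, the synthesised strategy explains manoeuvres while they are still novel and skips explanations once the passenger is sufficiently familiar with them, mirroring the trade-off described in the abstract.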

Keywords: Autonomous Vehicles, Explanation Timing, Reactive Game Theory, Attention Model, Human-Machine Interaction

DOI: 10.54941/ahfe1005221
