Explainability as a means for transparency? Lay users' requirements towards transparent AI
Open Access
Article
Conference Proceedings
Authors: Johanna M. Werz, Esther Borowski, Ingrid Isenhardt
Abstract: With the rise of increasingly complex artificial intelligence (AI) systems, their inner processes have become black boxes. The failure of some systems and the largely unregulated market of digital services have prompted governments and bodies such as the EU to work on regulatory legislation. Their main requirement is that AI must be transparent for all stakeholders. While AI developers and experts have worked on interpretability and explainability, social scientists emphasize that explainable AI is hardly understandable for lay users. The question arises as to whether the concept of explainability can be used to create transparency for laypersons and what (additional) requirements these users might have towards transparent AI. To answer these questions, three fictitious AI apps were discussed in focus groups with n = 26 participants. The apps differed in domain and error significance in order to identify system-dependent requirements. The results indicate that lay users have different expectations and requirements for transparency in AI than technical experts: (a) previous experience with the domain and system(s) strongly shapes transparency demands, (b) background information beyond explainability concepts is highly relevant for building trust, and (c) the system factor error significance acts as a magnifying glass for transparency requirements. In summary, the qualitative study shows that explainability cannot serve as the only means of making systems transparent for lay users. Possible implications for system development are discussed. These implications apply in particular to AI that addresses lay users, i.e., non-computer experts.
Keywords: XAI, Transparent AI, Understandability, User-centered design, Democratic AI
DOI: 10.54941/ahfe1004712