Trust Us: A Simple Model for Understanding Appropriate Trust in AI

Authors: Kyra Wisniewski, Christina Ting, Laura Matzen

Abstract: We address the dissonance in the formal study of trust in artificial intelligence (AI) by presenting a simple trust model that connects the two most commonly cited definitions of trust. This dissonance can be largely attributed to the fact that expressions of trust are familiar to us, but the abstract concepts we formally study are not. To illustrate, consider what it means to trust your car navigation system. You might say that you trust your navigation system’s ability to recommend the best route during rush hour. However, when it comes down to it, you may opt to stay on your standard route. Your words express trust as an attitude; your actions express trust as an intention. While we can easily differentiate these expressions of trust in everyday life, the overloading of the term “trust” to mean both an attitude and an intention has led to a lack of precision and confusion in its formal study. We analyze the two papers most frequently cited by the community for their definitions of trust. One paper defines trust as an attitude (Lee & See, 2004), while the other defines trust as an intention (Mayer, Davis, & Schoorman, 1995). We develop a simple trust model that clearly articulates the relationship between these definitions. Simply put, trust as an attitude is weighed against perceived risk to determine trust as an intention. We also use the model to define appropriate trust in AI. A major goal of this work is to enable the design of trust experiments that manipulate and measure components of a shared model, allowing comparison across research efforts and the accumulation of a consistent body of trust research. A practical implication of understanding trust is to strengthen the relationship between humans and technology.
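The model's core relation lends itself to a worked illustration. The following is a minimal sketch in Python, assuming a simple threshold reading in which trust-as-attitude and perceived risk are scored on a 0-1 scale; the function name, the scales, and the comparison rule are illustrative assumptions, not the authors' formalization.

def intends_to_rely(attitude: float, perceived_risk: float) -> bool:
    """Trust as an attitude is weighed against perceived risk to
    determine trust as an intention (here, an intention to rely on the AI)."""
    return attitude > perceived_risk

# Navigation example from the abstract: the driver reports a trusting
# attitude toward the system (0.7), but perceived rush-hour risk (0.8)
# outweighs it, so no intention to follow the recommendation forms.
print(intends_to_rely(attitude=0.7, perceived_risk=0.8))  # False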

Keywords: Trust in AI, Trust in Automation, Trust Measures, Trust, Appropriate Trust, Human-Machine Teaming, Decision Making

DOI: 10.54941/ahfe1005594

