Propensity Matters – An Empirical Analysis on the Importance of Trust for the Intention to Use Artificial Intelligence

Open Access
Article
Conference Proceedings
Authors: Jona Karg, Frank Ritz, Petra Maria Asprion

Abstract: There is a growing need for scientific knowledge about the extent to which the results of artificial intelligence (AI), and the effects of its use, can be considered trustworthy. User experience can lead to trust in AI that is too low or too high, either of which can result in misuse. This matters especially because trust is considered subjective and can be seen as a heuristic, which underscores its importance in AI, since the underlying algorithm of so-called black-box models is not transparent to the user. In this context, the call to enhance the transparency of such models in order to increase trust appears contradictory. No common theory exists, but Lee and See's (2004) model of trust in automation is often used as a basis for research, since automation can be seen as the foundation of AI. It remains unclear, however, whether this model can be adapted to AI. This study therefore investigates which factors influence trust in AI in the context of ChatGPT and how this trust affects the intention to use. On this basis, a conceptual path model was derived and tested using path analysis. Data were collected from 105 students using validated questionnaires. The empirical path model shows the expected positive influences, with one exception. The results further emphasize the central role of the propensity to trust, and the significant influence of trust on intention to use is weaker than expected. While the results largely align with existing assumptions, they also introduce new insights.

Keywords: Artificial Intelligence, Human-AI interaction, Trust, Propensity to Trust, Intention to Use, Explainable AI

DOI: 10.54941/ahfe1006710
