When to Trust the Machine: A Simulation Framework for Human–AI Collaboration

Open Access
Article
Conference Proceedings
Authors: Soraya Hani, Ancuta Margondai, Sara Willox, Cindy Von Ahlefeldt, Valentina Ezcurra, Anamaria Acevedo Diaz, Nikita Islam, Mustapha Mouloua

Abstract: Safe human–AI collaboration in safety-critical domains such as transportation depends on properly calibrated trust in artificial intelligence. This study explored how transparency affects trust development through a simulation of human–AI interaction in automated driving. A discrete event simulation modeled human agents interacting with an automated driving assistant at different levels of reliability and transparency. Trust changed asymmetrically, decreasing three times faster after errors than it increased after corrections. Transparency was tested in four conditions: none, confidence only, rationale only, and full transparency (confidence, rationale, and uncertainty). Analysis of 24 million decisions from 24,000 simulation runs showed significant effects of reliability and transparency on trust calibration, as well as a notable interaction between the two. High transparency reduced calibration error by 42.5%, improved task accuracy beyond the human baseline, increased acceptance 2.4-fold, and significantly decreased both overtrust and undertrust. Decision latency rose slightly but remained acceptable. Time-series analyses indicated that trust aligned with actual AI reliability only under transparent conditions. Transparency explained 73% of the variance in trust calibration, surpassing the impact of AI reliability alone. These results highlight transparency as essential for calibrated trust and safe reliance in human–AI systems, offering quantitative guidance for explainable AI design in transportation and other safety-critical fields.
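
The asymmetric trust dynamic summarized above can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's actual model: the 0–1 trust scale, the step sizes, and the simulate_trust helper are invented for this example; only the roughly 3:1 ratio of loss after errors to gain after corrections follows the abstract.

```python
import random

# Assumed step sizes: trust falls about three times faster after an AI error
# than it rises after a correct decision (the 3:1 ratio follows the abstract;
# the absolute magnitudes and the 0-1 scale are illustrative assumptions).
TRUST_GAIN = 0.01   # increment after a correct AI decision (assumed)
TRUST_LOSS = 0.03   # decrement after an AI error, ~3x the gain (assumed)

def simulate_trust(reliability: float, n_decisions: int = 1_000, seed: int = 0) -> float:
    """Run a toy sequence of AI decisions and return the final trust level."""
    rng = random.Random(seed)
    trust = 0.5  # assumed neutral starting point
    for _ in range(n_decisions):
        if rng.random() < reliability:
            trust = min(1.0, trust + TRUST_GAIN)  # correct decision: slow gain
        else:
            trust = max(0.0, trust - TRUST_LOSS)  # error: faster loss
    return trust

if __name__ == "__main__":
    for r in (0.70, 0.85, 0.95):
        print(f"reliability={r:.2f} -> final trust={simulate_trust(r):.2f}")
```

With these assumed step sizes the expected change per decision is 0.01·r − 0.03·(1 − r), so trust drifts downward whenever reliability falls below 0.75 and climbs toward the ceiling above it, which is one way to see how the asymmetry penalizes moderately unreliable automation.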

Keywords: Human–AI Collaboration, Trust Calibration, Automation Bias, Explainable AI (XAI), Human Factors in AI

DOI: 10.54941/ahfe1007095
