Trust in AI and Autonomous Systems

Open Access Conference Proceedings Article
Authors: Elizabeth Mezzacappa, Dominic Cheng, Lucas Hess, Nikola Jovanovic, Robert Demarco, Jose Rodriguez, Madeline Kiel, Kenneth Short, Alexis Cady, Jessika Decker, Mark Germar, Keith Koehler, Nasir Jaffery, Lawrence D'aries

Abstract: In 2023, the Office of the Undersecretary of Defense initiated the Center for Calibrated Trust Measurement and Evaluation (CaTE), aimed at establishing methods for assuring trustworthiness in artificial intelligence (AI) systems, with an emphasis on human-autonomy interaction. As part of the CaTE effort, the DEVCOM Armaments Center's Tactical Behavior Research Laboratory was tasked with developing standards for testing and measuring calibrated trust in AI-enabled armament systems. Qualitative and quantitative measures of trust were collected from over 80 Soldiers in table-top exercises, force-on-force events, simulated environments, and engineering integration events. In particular, a survey instrument configured specifically for assessing trust in AI weapon systems was created for this research. By embedding with Soldiers during operational exercises using actual systems, the researchers were able to gather footage and recordings of possible human systems integration (HSI) issues. Information from this live exercise was used to configure a virtual environment experiment using the same terrain, controllers, and systems as in the live exercise. This presentation will give an overview of the research program, with emphasis on novel HSI data collection methods.

Keywords: Artificial Intelligence, Autonomy, Trustworthy, Trust Calibration, Soldier Touch Point

DOI: 10.54941/ahfe1006376
