Trust in an Autonomous Agent for Predictive Maintenance: How Agent Transparency Could Impact Compliance

Open Access Conference Proceedings Article
Authors: Loïck Simon, Philippe Rauffet, Clément Guérin, Cédric Seguin

Abstract: In the context of Industry 4.0, human operators will increasingly cooperate with intelligent systems, considered as teammates in a joint activity. This human-autonomy teaming is particularly prevalent in predictive maintenance, where the system advises the operator to advance or postpone operations on the machines according to the projection of their future state. As in human-human cooperation, the effectiveness of cooperation with these autonomous agents depends especially on the notion of trust. The challenge is to calibrate an appropriate level of trust and avoid misuse, disuse, or abuse of the recommending system. Compliance (i.e., a positive response by the operator to advice from an autonomous agent) can be interpreted as an objective measure of trust, since the operator relies on the agent's advice. Compliance is also based on the operator's risk perception of the situation, as the operator weighs the risks and benefits of advancing or postponing an operation. One way to calibrate trust and enhance risk perception is to use the concept of transparency. Transparency has been defined as information, provided during human-machine interaction in an easily usable form, intended to promote comprehension and shared awareness of the agent's intent, role, interactions, performance, future plans, and reasoning process. This research focuses on two aspects of transparency: the reliability of the autonomous agent, and the outcomes linked to the agent's advice. The objective of this research is to understand the effect of autonomous agent transparency on human trust after advice from an autonomous agent (here, an AI for predictive maintenance) in more or less risky situations. Our hypothesis is that transparency will impact compliance (H1: risk transparency will decrease compliance; H2: reliability transparency will increase compliance; H3: full transparency will decrease compliance). For this experiment, we recruited participants to complete decision situations, i.e., to accept or reject a proposition from a predictive maintenance algorithm to advance or postpone a maintenance operation scheduled in the CMMS. Predictive maintenance software for the maritime context was used to present these situations. During the experiment, the agent transparency level is manipulated by displaying information related to agent reliability and to situation outcomes, separately or in combination. Agent transparency is crossed with situation complexity (high or low) and the type of advice (advancing or postponing the maintenance intervention). Age, gender, profession, and affinity for technology are assessed as control variables. Because the situations involve risk taking, a risk-taking propensity scale is also administered. Trust (subjective and objective), risk perception, and mental workload are measured after each situation. Finally, participants report the main information they used to make their choice in each experimental setting.
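As a minimal formalization of the objective trust measure (our sketch, not stated in the abstract, with hypothetical symbols), compliance could be operationalized per participant as the acceptance rate over decision situations: $\text{compliance} = n_{\text{followed}} / n_{\text{situations}}$, where $n_{\text{followed}}$ is the number of situations in which the operator accepts the agent's advice and $n_{\text{situations}}$ is the total number of decision situations presented.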

Keywords: trust, human-robot interaction, compliance, transparency

DOI: 10.54941/ahfe1001602
