Human Autonomy Teaming: proposition of a new model of trust

Open Access
Article
Conference Proceedings
Authors: Helene Unrein, Théodore Letouzé, Jean-Marc André, Sylvain Hourlier

Abstract: The literature on trust between humans and technological systems is abundant. Trust does not seem to follow a simple dynamic, given the many factors that affect it: the system's mode of communication, its appearance, the severity of possible system failures, factors favoring recovery, etc.

In this work, we propose a model of the dynamics of a human agent's trust toward an autonomous system (Human Autonomy Teaming, HAT) inspired by a hysteresis cycle. Hysteresis reflects a lag in the response of materials, known as inertia. By the same principle, variation in trust would rest on a non-linear relationship between trust and expectations. These variations would thus appear as interactions occur (like a discrete variable) rather than on a continuous time scale.

We further suggest that trust varies with: the conformity of expectations, the previous level of trust, the duration for which a good or bad level of trust has been maintained, and the inter-individual characteristics of the human agent.

Expectations reflect the human agent's evaluation of the situation, based on the knowledge at their disposal and on the expected performance of the system. At each confrontation with reality, if the perceived reality agrees with what was expected, the expectations are met; otherwise they are violated. Depending on the initial state of trust, these expectations influence the variation in trust, which is determined through the hysteresis cycle. At the two ends of the cycle, the level of trust is characterized as either calibrated trust or distrust. Indeed, trust does not increase toward a maximum but toward an optimal level, calibrated trust: a level of trust adapted to the capabilities of the autonomous system. Conversely, trust decreases to a level of distrust, the situation in which the individual no longer trusts the system and rejects it. In our context of use, the individual is obliged to keep interacting with the autonomous system, which opens the possibility of overcoming this distrust and restoring all or part of the initial trust.

We propose that maintaining either calibrated trust or distrust produces an inertia effect: the longer trust is held at one of these levels, the greater the inertia. Thus, calibrated trust established over a short period will be more affected by violated expectations than calibrated trust established over the long term.

Finally, the evolution of trust is influenced by individual criteria. Although the model described here is generic, it can be personalized according to the predispositions of the human agent: propensity to trust, personality traits, attitudes toward technological systems, etc.

The model presented is not intended to debate the nature of trust. It illustrates and explains the dynamics of trust, a key factor in the HAT relationship, both at the origin of this interaction and in the results it produces.
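The abstract describes the dynamics qualitatively only; the paper gives no equations. The following sketch is therefore purely illustrative: it shows one possible way the described mechanism (discrete trust updates per interaction, pulled toward calibrated trust when expectations are met and toward distrust when they are violated, damped by an inertia term that grows the longer trust dwells at either extreme) could be encoded. All parameter names, values, and the specific inertia law are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class TrustModel:
    """Illustrative discrete-time trust update with inertia.

    Hypothetical parametrization: the paper describes the hysteresis
    dynamics qualitatively, not with equations.
    """
    trust: float = 0.5          # current trust in [0, 1]
    calibrated: float = 0.9     # optimal ("calibrated") trust level
    distrust: float = 0.1      # floor corresponding to distrust
    gain: float = 0.2           # base step size per interaction
    dwell: int = 0              # interactions spent near an extreme

    def update(self, expectation_met: bool) -> float:
        # Inertia: the longer trust has dwelled at calibrated trust
        # or distrust, the smaller the effect of a new interaction.
        inertia = 1.0 / (1.0 + self.dwell)
        target = self.calibrated if expectation_met else self.distrust
        self.trust += self.gain * inertia * (target - self.trust)
        # Track how long trust has stayed near either extreme.
        near_extreme = (abs(self.trust - self.calibrated) < 0.05
                        or abs(self.trust - self.distrust) < 0.05)
        self.dwell = self.dwell + 1 if near_extreme else 0
        return self.trust
```

Under this toy parametrization, calibrated trust built up over many interactions (large `dwell`) barely moves after a single violated expectation, whereas the same trust level reached moments ago (zero `dwell`) drops sharply, which is the short-term versus long-term asymmetry the abstract describes.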

Keywords: Autonomous system, Trust, Inertia

DOI: 10.54941/ahfe1005015
