How do system failures and ransomware affect drivers' trust and attitudes in an automated car? A simulator study

Authors: William Payre, Jaume Perelló-March, Giedre Sabaliauskaite, Hesamaldin Jadidbonab, Siraj Shaikh, Stewart Birrell

Abstract: Conditionally automated driving systems (SAE Level 3) are able to perform the lateral and longitudinal control of a vehicle and warn the driver of the status of the system and its ongoing operations. The driver must monitor the system and resume control if prompted to do so. Previous research in the realm of automated driving has explored how in-vehicle information should be presented to optimise drivers' trust in the system (Wintersberger et al., 2020). For instance, conveying the status and actions of the system contributes to transparency and supports adequate trust in the system (Carsten & Martens, 2019). Yet, little is known about the consequences for trust of failing to provide reliable information on the vehicle's status and operations. This is particularly salient in the case of silent failures, whereby the system fails to notify the driver of its limits and its incapacity to operate reliably (Louw et al., 2019). This lack of empirical evidence is surprising, as automation failures are likely to affect drivers' trust in the system (Payre et al., 2015; 2017), thereby leading to disuse (e.g. not using the system), misuse (e.g. unsafe operation, as reported in the National Transportation Safety Board's 2017 Tesla crash report) or abuse (e.g. taking advantage of the system's limits) of such systems (Parasuraman, 1997). Past work has stressed that users' subjective level of trust should be aligned with the capabilities of the automation to mitigate the undesirable effects of overtrust (i.e. using the automated system despite its unreliability) and distrust (i.e. not using the system although it is reliable; Khastgir et al., 2018). This process has been identified as trust calibration (Lee & See, 2004). Even though a wealth of studies has shown what information should be presented, and how, to support trust calibration, little research attention has been devoted to understanding if, how and when failures affect individuals' trust in the automated system, and the subsequent impact on driving performance. Addressing this research gap, the present study combines cyber security and human factors approaches to investigate the effect of the type of failure (silent vs. explicit) and its timing (early vs. late in the journey) on individuals' trust, attitudes and safety. From the cyber security perspective, a threat analysis of in-vehicle digital displays was conducted. This led to the development of a series of use cases in which a malfunction or intrusion (e.g. hacking) could occur. These use cases were implemented in a driver-in-the-loop simulator, where participants' (N = 37) responses with respect to trust in the automation, driving performance and safety were collected. Results from this experiment are discussed in the context of road safety, attitudes and driver behaviour (e.g. manual handover, acceptance and trust).

Keywords: Trust, Automation, Driving, Behaviour, Cyber Security, Attitudes, Acceptance, Failure, Display, HMI

DOI: 10.54941/ahfe1002764
