Can Drivers Construct Accurate Mental Models of Autopilot?

Open Access
Article
Conference Proceedings
Authors: Hugh Salehi, Daniel McGehee, John Gaspar, Cher Carney

Abstract: More car manufacturers are integrating partial driving automation into their vehicles. In partial automated driving mode, drivers can hand vehicle control to the automated system on designated highways. The system combines technologies such as adaptive cruise control, lane centering, and driver monitoring to help ensure that drivers remain attentive to the road while supervising the automation. The promise is that partial automated driving will improve road safety. Although the system can take over some driving tasks, it still requires human supervision and intervention, and it is restricted to designated highways. Drivers must actively monitor the system and be prepared to intervene when necessary. The level of trust drivers build in the system affects how safely they monitor it and take back control when needed. Calibrating trust to an appropriate level requires an accurate mental model of the system, which in turn requires understanding how the system behaves in various scenarios. Our research analyzed drivers' mental models of Tesla's Autopilot in situations that are likely to be confusing or that have contributed to past Tesla crashes. The drivers were unfamiliar with automated driving and had no previous experience with adaptive cruise control.

Method: We conducted an experiment in which 10 individuals drove a Tesla Model S equipped with the partial automation system known as Autopilot for one week during their daily commutes. At the end of the week, participants took part in an open-ended interview to elicit their mental models. The interviews were recorded and transcribed for thematic analysis.

Results: Although most participants' initial mental models were in line with the system's capabilities, our findings raise concerns about mental models of Autopilot's limitations, particularly in city driving. Two safety-critical themes were identified in the interviews: misunderstandings of Autopilot's limitations and misunderstandings of the system's purpose. Misunderstandings of limitations included all statements and sub-themes indicating participants' confusion about situations in which Autopilot cannot be used. Misunderstandings of the system's purpose included statements reflecting participants' misunderstanding of their supervisory role in monitoring the automation. Our findings suggest that new drivers should be trained in automated driving to increase safety.

Keywords: Human-automation interaction, trust, road safety, AV

DOI: 10.54941/ahfe1006505

