CogDriver: The Longest-Running Autonomous Driving Cognitive Model Exhibits Human Factors

Open Access
Article
Conference Proceedings
Authors: Christian Wasta, Siyu Wu, Frank E. Ritter

Abstract: We are exploring how cognitive models can use models of human perception and motor control to interact directly with interfaces. We present CogDriver, a cognitive driving model capable of performing a long-duration autonomous driving task in a virtual simulation environment. The model, built using the ACT-R cognitive architecture and enhanced with robotic hands and eyes, supports the cognitive-perceptual-motor knowledge essential for simple human driving. It has two main strengths compared with other autonomous driving models: (a) it is built upon observed human driving behavior, incorporating error-making and learning, and (b) it leverages a cognitive architecture to provide insights into the psychology of driving behavior. Compared with our previous version, this model shows improved endurance, maintaining its driving state for over 18 h from Tucson to Las Vegas, even under nighttime conditions. These enhancements were realized by incorporating human-like representations of driving knowledge and actions. The model now includes a model of error handling and several visual-cue strategies. Its predictions can match certain aspects of human behavior in fine detail, such as the number of course corrections, average speed, learning rate, and adaptation to low-visibility conditions. This model demonstrates that (a) perception-and-action loops with fallback handling provide an accessible testbed for examining further aspects of behavior, and (b) the model-task combination supports exploring aspects of human behavior that remain missing from ACT-R. Model, simulation, and data can be accessed at https://github.com/christianwasta/DriveBus/tree/drivebus-wasta.
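The perception-and-action loop with fallback handling mentioned in the abstract can be illustrated with a minimal sketch. The Python below is a hypothetical illustration, not the authors' ACT-R code (which is available at the repository above); the names perceive, act, drive_step, FALLBACK_SPEED, and the toy vehicle dynamics are all assumptions made for exposition.

```python
# Hypothetical sketch of a perceive-decide-act driving loop with fallback
# handling, under low-visibility conditions; illustrative only, not the
# authors' ACT-R implementation.
import random

FALLBACK_SPEED = 10.0   # assumed "safe" speed when no visual cue is available
TARGET_SPEED = 60.0     # assumed nominal cruising speed

def perceive(sim):
    """Return a noisy lane-offset cue, or None when visibility is too low."""
    if sim["visibility"] < 0.2:                  # e.g., nighttime conditions
        return None
    return sim["lane_offset"] + random.gauss(0.0, 0.05)

def act(sim, steering, speed):
    """Apply motor commands to a trivially simplified vehicle model."""
    sim["lane_offset"] += steering
    sim["speed"] = speed

def drive_step(sim):
    """One cycle of the loop: perceive, then act or fall back."""
    cue = perceive(sim)
    if cue is None:
        # Fallback: hold the wheel steady and slow down until a cue returns.
        act(sim, steering=0.0, speed=FALLBACK_SPEED)
        return "fallback"
    # Nominal: steer proportionally against the observed lateral offset.
    act(sim, steering=-0.5 * cue, speed=TARGET_SPEED)
    return "nominal"

if __name__ == "__main__":
    sim = {"lane_offset": 0.3, "speed": 0.0, "visibility": 0.8}
    for _ in range(5):
        print(drive_step(sim), round(sim["lane_offset"], 3))
```

In this toy form, course corrections correspond to the proportional steering updates, and the fallback branch stands in for the model's error handling when visual cues are lost.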

Keywords: Cognitive autonomous driving, ACT-R, ACT-R cognitive architecture, CogDriver

DOI: 10.54941/ahfe1006100

