Tackling Human Factors in Aviation Safety - An Application of AI Facial Recognition Technology
Conference Proceedings Article (Open Access)
Authors: Yanbing Chen, Shuzhen Luo, Andy Dattel
Abstract: Transportation accidents are significantly affected by human factors, which account for a substantial proportion of incidents and fatalities. Factors such as fatigue, stress, illness, medication, and substance use impair pilot performance, leading to compromised decision-making, reduced situational awareness, and increased risk-taking behavior (Wingelaar-Jagt et al., 2021). While regulatory guidelines and medical evaluations exist to address these challenges, current measures often rely on self-reporting and subjective assessments that can be prone to bias. Artificial intelligence (AI)-driven facial recognition models have been used in other industries to assess human subjects' health status (Chan et al., 2024) and cognitive workload (Iarlori et al., 2024). This research aims to develop an AI-driven facial recognition model to objectively assess pilot fitness to fly by analyzing microexpressions, facial symmetry, eye movement, and other biomarkers that reflect fatigue, stress, and impairment. The AI model will be trained on publicly available datasets containing facial images of individuals in varying conditions, such as fatigue, drowsiness, stress, sadness, and being under the influence of alcohol, drugs, or medication. Data preprocessing will employ facial landmark detection and attention-based image segmentation to isolate key facial regions, including the eyes (tracking movement and redness), mouth (symmetry, dryness, or tremor), and skin tone (color changes indicative of intoxication or stress) (Chan et al., 2024). Model training will leverage deep convolutional neural networks (CNNs), using transfer learning to enhance performance on smaller datasets. The research comprises three tasks. Task 1 focuses on model building using secondary data from publicly available facial image datasets covering these conditions. Task 2 involves a laboratory-based experiment with healthy individuals to validate and refine the AI algorithm's accuracy in detecting cognitive performance changes under stress; participants will perform cognitive tasks under high-stress conditions, and facial images will be captured to fine-tune the algorithm. Task 3 is a simulation-based experiment to adapt the AI algorithm to aviation-specific applications: licensed pilots will perform flight simulation tasks under high-workload or stressful conditions, such as emergency scenarios and adverse weather, and data from facial images and simulator metrics (decision-making speed, navigation accuracy, and task prioritization) will be analyzed to tune the algorithm for real-time, aviation-specific assessment. Integrating this technology into the preflight screening process will provide real-time, non-invasive assessments that complement existing protocols and enhance aviation safety by offering early warnings of performance degradation, thereby reducing accident risk and improving operational efficiency. The same technology could also be used in flight to detect subtle cues and inform pilots of their assessed condition. The authors acknowledge the Embry-Riddle Aeronautical University – FIRST Program for the funding provided, as well as the consistent support of the College of Aviation - School of Graduate Studies and the College of Engineering - Department of Mechanical Engineering.
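As a rough illustration of the preprocessing step described in the abstract, the sketch below crops the eye and mouth regions from a face image using landmark detection. The abstract does not name a specific library; dlib's 68-point landmark model is an assumption here, as are the model file path and the margin parameter. In that model, indices 36-47 cover the eyes and 48-67 the mouth.

```python
"""Minimal sketch: isolate eye and mouth regions via facial landmarks.
Assumes dlib's 68-point model (not specified in the paper); the model
file is downloadable from dlib.net and the path below is illustrative.
"""
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_regions(image_path, margin=10):
    """Return cropped eye and mouth patches for the first detected face."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None  # no face found in the image
    shape = predictor(gray, faces[0])
    pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])

    def crop(indices):
        # Bounding box of the selected landmarks, padded by `margin`.
        x0, y0 = pts[indices].min(axis=0) - margin
        x1, y1 = pts[indices].max(axis=0) + margin
        return img[max(y0, 0):y1, max(x0, 0):x1]

    return {
        "left_eye": crop(range(36, 42)),
        "right_eye": crop(range(42, 48)),
        "mouth": crop(range(48, 68)),
    }
```

The cropped patches could then feed downstream analyses the abstract mentions, such as redness or tremor measures; attention-based segmentation would replace the simple bounding-box crops shown here.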
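Similarly, a minimal sketch of the transfer-learning setup the abstract describes: a CNN pretrained on ImageNet with its backbone frozen, so only a new classification head is fit to a small facial-image dataset. The choice of ResNet-18, the class labels, the dataset layout, and all hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
"""Minimal sketch: transfer learning on a small facial-image dataset.
Backbone, labels, paths, and hyperparameters are assumptions.
"""
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 3  # e.g., alert / fatigued / stressed (assumed labels)

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# ImageFolder expects data/train/<class_name>/*.jpg (illustrative path).
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Load a pretrained ResNet-18 and freeze the backbone so only the new
# classification head is trained, which suits smaller datasets.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing the backbone and training only the head is one common way to make transfer learning effective on limited data, consistent with the abstract's stated motivation; unfreezing later layers for fine-tuning is a natural next step once more data are collected.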
Keywords: facial recognition, public safety, safety policy, human factors, human error
DOI: 10.54941/ahfe1007013