A Human Centric Design Approach for Future Human-AI Teams in Aviation

Open Access Article | Conference Proceedings
Authors: Barry Kirwan, Roberto Venditti, Nikolas Giampaolo, Miguel Villegas Sánchez

Abstract: Human Factors and Aviation have been effective partners for decades, with systematic research leading to guidance and regulations on cockpit design, air traffic control display and interaction design, fatigue management, and crew resource management, all of which have helped aviation maintain its record as the safest mode of transportation. The introduction of Artificial Intelligence (AI) in aviation has already begun, with Machine Learning systems supporting aviation workers in a number of areas. But so far, such AI additions can be seen as 'just more automation', as the human - whether pilot or air traffic controller - remains very much in command and control, maintaining situation awareness and acting as the principal safety barrier against accidents. With the advent of future AI systems likely to appear in the next decade, this is likely to change. AI systems with a higher degree of autonomy are envisaged, and already being researched. A collaborative relationship is foreseen - generically known as Human-AI Teaming - in which the human will 'partner' with 'Intelligent Assistants'. This may include the AI deciding what to do and executing its own tasks, negotiating with the human crew, and even reconsidering its goals as part of the team. Human-AI Teaming raises a host of questions and challenges for Human Factors, such as how to achieve trust between human and AI, how to achieve satisfactory 'explainability' functions in the AI so the human can understand its advice and choices, and how to design means of human-AI interaction, whether visual, verbal or gestural. An overriding question, however, is how to ensure that the AI design remains human-centric, so that the human can still maintain their safety function and an overview of system performance, able to detect problems and step in if the AI goes wrong or finds itself out of its depth. Given the prospect of such advanced human-AI teaming scenarios, Human Factors will need to raise its game to assure the human centricity of aviation systems design.

As a first step towards preparing Human Factors for Human-AI Teaming (HAT), the European HAIKU project has developed a provisional methodology and applied it to several 'use cases'. These use cases involve AI support in emergencies to a single pilot in the cockpit, AI support to flight crew who have to divert to a different airport, AI support to remote tower controllers dealing with arriving and departing aircraft, and an executive manager of pilotless drone and sky-taxi traffic in urban environments. These four use cases vary in terms of their AI autonomy, and so are a reasonable test-bed for applying new Human Factors approaches.

A six-step Human Factors process for designing human-centric HAT systems has been developed:

1. Task Analysis
2. Human-AI Teaming Requirements
3. Human HAZOP
4. Human-in-the-Loop Simulations
5. Training & Operational Readiness Testing
6. Monitoring, Adapting and Learning

The paper will focus mainly on the first three steps, in the context of the use cases, highlighting the novel issues found, e.g. relating to different explainability requirements depending on the AI's function, interaction design considerations, and training requirements that go beyond what is normally required. In particular, the approach monitors subtle shifts that can occur in the role and responsibilities of the human operator, which helps determine whether the system is in danger of becoming less human-centric in nature, and what the safety-related consequences could be.
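To make the stepwise structure of the six-step process above concrete, here is a minimal, purely illustrative Python sketch of how progress through the steps might be tracked for a single use case. The Step names, the UseCase fields, and the 1-5 autonomy scale are hypothetical conveniences invented for this illustration; they are not defined in the paper.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Step(Enum):
    """The six steps of the human-centric HAT design process, in order."""
    TASK_ANALYSIS = auto()
    HAT_REQUIREMENTS = auto()
    HUMAN_HAZOP = auto()
    HITL_SIMULATION = auto()
    TRAINING_AND_ORT = auto()
    MONITOR_ADAPT_LEARN = auto()

@dataclass
class UseCase:
    """A HAIKU-style use case; fields are hypothetical, for illustration only."""
    name: str
    ai_autonomy: int                      # assumed 1-5 ordinal scale (not from the paper)
    completed: set = field(default_factory=set)

    def complete(self, step: Step) -> None:
        # Enforce the ordering of the six steps: a step may only be
        # completed once all earlier steps have been completed.
        earlier = {s for s in Step if s.value < step.value}
        missing = earlier - self.completed
        if missing:
            raise ValueError(f"{step.name} blocked; missing: {[m.name for m in missing]}")
        self.completed.add(step)

# Example: walk the single-pilot emergency-support use case through the
# first three steps, which are the focus of the paper.
uc = UseCase("Single-pilot emergency support", ai_autonomy=3)
for s in (Step.TASK_ANALYSIS, Step.HAT_REQUIREMENTS, Step.HUMAN_HAZOP):
    uc.complete(s)
print(sorted(s.name for s in uc.completed))
```

The ordering check mirrors the sequence implied by the paper: analysis and requirements work precedes the Human HAZOP, which in turn precedes simulation, training, and in-service monitoring.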

Keywords: Human Factors, Human AI Teaming, Aviation

DOI: 10.54941/ahfe1005464
