Augmented cognition requires a psychologically sound human role: a methodical approach

Open Access
Article
Conference Proceedings
Authors: Patrick Zinsli, Stephanie Kalt, Nerissa Dettling, Samira Hamouche, Toni Waefler

Abstract: The capabilities of AI and the quality of AI-generated output are increasing at an unprecedented rate. At the same time, the challenges of human-AI collaboration are also growing. This is because the better an AI performs, the more difficult it becomes for humans to recognize AI malfunctions. For example, while hallucinations from a poor LLM are quite obvious and therefore easy to identify, hallucinations from a powerful LLM are sophisticated and opaque, making them increasingly difficult for humans to detect. Challenges in human-AI collaboration, described e.g. by Endsley (2023), are therefore not symptoms of a new technology's teething problems, but rather inherent in AI itself. Essentially, the challenge lies in the fact that humans are expected to act as a firewall for AI deficiencies. However, supervising an AI that processes far more data than humans are capable of, using a model humans do not understand, is a task that exceeds human capabilities. As a consequence, humans are not suited to take on the task of monitoring AI or evaluating AI-generated recommendations and bearing responsibility for them. Against this background, the HORIZON project AI4REALNET (cf. ai4realnet.eu) aims to research AI-based solutions for critical systems (electricity, railway and air traffic management) that are traditionally operated by humans, and where AI systems complement and augment human abilities. As part of the project, the "Supportive AI Framework" (Waefler et al., 2025) was developed and presented at HCII 2025 in Gothenburg, Sweden. This framework aims at an intensified human-AI collaboration (Waefler, 2021), in which humans are active participants rather than passive observers of AI or recipients of AI-generated information. Instead, humans and AI are considered a joint cognitive system (Hollnagel & Woods, 2005) based on their qualitatively different but complementary strengths and weaknesses.
With the aim of augmenting human cognitive abilities, the framework conceptualizes ways for AI to explicitly support human cognitive processes such as decision-making or learning. The paper proposed in this abstract covers the methodological part of the "Supportive AI Framework". A procedure is presented, together with suitable tools, that supports the analysis and design of human-AI collaboration based on cognitive task analysis. Special attention is paid to the creation of a psychologically coherent human role in human-AI collaboration. This is to avoid the negative consequences of AI integration as described e.g. by Endsley (2023) or Bucinca et al. (2024), such as deskilling, demotivation or cognitive overstraining. The method provides guidance on creating detailed descriptions of the roles of humans and AI in specific scenarios, as well as of how they collaborate. It also includes a detailed analysis of the (tacit) knowledge and skills that humans need to fulfill their assigned roles, as well as how these are acquired. Both are critical to avoid deskilling. The paper describes the method in detail and illustrates its application using examples from projects where augmented human cognition in knowledge-intensive tasks is envisioned. The aim in these projects is to combine humans and AI in the tradition of sociotechnical system design and complementary function allocation.

Keywords: Human-AI system, Human-AI function allocation, Task analysis, Human role design, Human-AI collaboration design

DOI: 10.54941/ahfe1007183
