AI as a leader - what individual factors influence the acceptance of AI applications that take on leadership tasks?
Open Access
Article
Conference Proceedings
Authors: Deborah Petrat, Lucas Polanski-Schräder, Ilker Yenice, Lukas Bier, Ilka Subtil
Abstract: In times of digital transformation and the rise of Artificial Intelligence (AI), there is a constant power struggle between technology and humans. Due to the advancing development of digitalization, AI is no longer a vision of the future. Methods such as machine learning already make it possible to work with large amounts of data. The goal of AI development is to support people in the best possible way in both professional and private contexts (Buxmann & Schmidt, 2018). AI is already capable of relieving leaders in a company, for example by taking over routine, steering, and/or deployment tasks, so that leaders have more time for their employees and can focus on the strategic development of their own area of responsibility (Manyika et al., 2017; Offensive Mittelstand, 2018). In the case of an AI manager, its successful integration will ultimately depend on whether employees and even other human managers will accept an algorithm's instructions (Sahota & Ashley, 2019). For the subsequent successful implementation of AI as a leader, it will be critical to determine what application-specific concerns exist and what specific expectations are placed on the design. This leads to the research question: What individual factors of human leaders and their employees influence the acceptance of AI as a leader? To answer this question, four hypotheses are operationalized in an online survey (N = 74) that collects data on leaders' and employees' acceptance and expectations of AI as a leader. The questionnaire is based on the literature and already established instruments. To survey the acceptance of the subjects, the technology acceptance model (TAM) proposed by Davis (1985) is followed, measuring perceived usefulness (PU) and perceived ease of use (PEU).
In the absence of concrete AI applications that embody the identity of an executive, three use cases from the corporate landscape are used as templates for three scenarios (a digital cognitive assistant in staff recruitment, a cognitive assistant in supervision in the form of a smart screen, and a physical autonomous system in the form of a robot). It is found that technology affinity as well as commitment have an impact on the acceptance of AI leaders. Technology-related factors predicted higher acceptance for an AI leader acting as a cognitive assistant in supervision: participants who indicated more technological expertise or involvement in AI activities perceived AI leaders as easier to use. As expected, the effect of age on perceived ease of use was mediated by technology affinity (for all scenarios and in aggregate), such that older respondents had lower technology affinity and thus lower perceptions of the ease of use of AI leaders. Whether a respondent had managerial responsibility did not matter for acceptance. Most respondents were convinced that AI-powered leadership will change organizations in terms of new job profiles and new skills, but they did not believe in a radical transformation any time soon. An obligatory requirement is that such applications work as transparently as possible. This first step now needs to be confirmed in a broad-based study.
Keywords: Artificial intelligence, leadership, leadership roles, future of work, expectations, acceptance, cognitive assistants, transformation, transparency
DOI: 10.54941/ahfe1002233