Privacy Concerns in Recommender Systems for Personalized Learning at the Workplace: The Mediating Role of Perceived Trustworthiness
Open Access
Article
Conference Proceedings
Authors: Marina Klostermann, Lina Kluy
Abstract: Artificial intelligence (AI) is capable of reconfiguring activities in Human Resource Management (HRM), including talent acquisition, performance management, and learning and development (Minbaeva, 2021). Integrating AI into HRM systems can streamline otherwise lengthy and labor-intensive processes, such as comprehensive needs assessments for learning and development. Moreover, AI in HRM has the potential to enhance decision-making processes and the employee experience (Strohmeier, 2020). However, the use of big data and personal information in AI-based HRM systems to provide employees with personalized learning recommendations gives rise to privacy concerns. These concerns must be addressed to guarantee a responsible and calibrated use of these technologies. If users doubt that a system adequately protects their personal information, they may perceive it as untrustworthy and, consequently, refrain from using it. In the context of privacy concerns, trust(worthiness) is assumed to be one of the most crucial predictors of behavior (e.g., the intention to use a system). However, the explicit role of perceived trustworthiness in the relationship between privacy concerns and the intention to use an AI-based system has yet to be demonstrated. The aim of the present study was to investigate whether perceived trustworthiness mediates the relation between privacy concerns and the intention to use an AI-based recommender system for workplace learning. An online experiment was developed to simulate such a system. The analysis is based on data from 69 participants (employees, 29 female, age M = 33.28 years, SD = 10.49) in one of the two experimental conditions, in which they were permitted to determine which personal information to provide for a personalized learning recommendation.
The mean interaction time with the recommender system was 43.23 minutes (SD = 18.64). The participants completed questionnaires addressing several constructs, including perceived trustworthiness, privacy concerns, and intention to use. Contrary to previous studies postulating privacy concerns as a predictor of privacy behavior, the analysis showed no direct effect of privacy concerns on the intention to use the system (B = -0.001, p > .05). However, privacy concerns significantly predicted perceived trustworthiness (B = -0.170, p < .05), which in turn significantly predicted the intention to use the system (B = 0.936, p < .01). Privacy concerns therefore exert an indirect influence on the intention to use the system through perceived trustworthiness. The results underscore the significance of perceived trustworthiness in the context of privacy concerns and the intention to use an AI-based recommender system for workplace learning. This study represents a preliminary step towards closing the research gap on the role of trust(worthiness) in the context of privacy concerns identified by previous studies. Implications can be derived for the design of human-centered recommender systems for workplace learning that increase perceived trustworthiness and reduce privacy concerns. Future research should investigate additional factors in the relationship between privacy concerns, attitudes, and behavior, such as perceived control over personal information.
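The reported pattern (no direct effect of privacy concerns, but an indirect effect via perceived trustworthiness) follows the standard indirect-effect decomposition in mediation analysis: path a (concerns to trustworthiness) multiplied by path b (trustworthiness to intention, controlling for concerns). The sketch below is purely illustrative, not the study's data or analysis code; it generates synthetic data whose coefficients merely echo those reported above and estimates the paths with ordinary least squares.

```python
import random

def slope(x, y):
    """Simple-regression slope of y on x (with intercept)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)

def ols2(x, m, y):
    """OLS of y on x and m (with intercept); returns (coef_x, coef_m)."""
    n = len(y)
    mx, mm, my = sum(x) / n, sum(m) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    smm = sum((mi - mm) ** 2 for mi in m)
    sxm = sum((xi - mx) * (mi - mm) for xi, mi in zip(x, m))
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    smy = sum((mi - mm) * (yi - my) for mi, yi in zip(m, y))
    det = sxx * smm - sxm ** 2
    return (smm * sxy - sxm * smy) / det, (sxx * smy - sxm * sxy) / det

# Synthetic data: coefficients loosely echo the abstract (a ~ -0.17, b ~ 0.94);
# a true direct effect of zero is assumed for illustration.
random.seed(1)
n = 69
concerns = [random.gauss(3.0, 1.0) for _ in range(n)]
trust = [4.0 - 0.17 * c + random.gauss(0, 0.3) for c in concerns]
intent = [0.94 * t + random.gauss(0, 0.3) for t in trust]

a = slope(concerns, trust)                   # path a: concerns -> trustworthiness
c_prime, b = ols2(concerns, trust, intent)   # direct effect c' and path b
indirect = a * b                             # indirect effect through trustworthiness
print(f"a={a:.3f}, b={b:.3f}, c'={c_prime:.3f}, indirect={indirect:.3f}")
```

With negative a and positive b, the product a*b is negative: higher privacy concerns lower trustworthiness, which in turn lowers the intention to use. In practice the indirect effect's significance would be assessed with a bootstrap confidence interval rather than from the point estimate alone.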
Keywords: privacy concerns, trustworthiness, intention to use, recommender system, workplace learning, artificial intelligence
DOI: 10.54941/ahfe1005906