Trusting AI: Factors Influencing Willingness of Accountability for AI-Generated Content in the Workplace

Open Access
Article
Conference Proceedings
Authors: Ulrike Aumüller, Eike Meyer

Abstract: In the rapidly evolving landscape of Artificial Intelligence (AI) and business ethics, a critical area of focus has emerged: the willingness of leadership to assume responsibility for AI-generated content in decision-making processes. While current public discourse predominantly addresses AI’s impact on customer service, potential biases, and job displacement, a less explored yet significant aspect is how AI reshapes tasks and roles within organizations, particularly in decision-making.

AI’s capability to analyse vast data sets expeditiously supports both operational and strategic decisions across various sectors. However, this support comes with ambivalent outcomes, ranging from enhanced efficiency to the risk of making decisions with negative business impact based on AI outputs containing hidden biases. Such ambiguity can undermine trust in AI, especially when the rationale behind AI-generated recommendations is opaque.

The central question of this paper is the extent to which leaders are prepared to be accountable for decisions made on the basis of AI insights. This includes scenarios where leaders themselves make AI-driven decisions, as well as situations where they are responsible for overseeing and endorsing decisions made by their team members based on artificial intelligence.

In understanding the adoption of AI in decision-making, key factors influencing trust in and usage of algorithms emerge. Research suggests, for example, that trust extends beyond algorithm accuracy and is significantly influenced by social validation, such as prior adoption by others, which can reduce cognitive load and improve engagement. Furthermore, cultural and age differences may play a crucial role. Additionally, an expectation of near-perfect performance from automated systems might lead to scepticism, especially when an algorithm falters, affecting ongoing trust and usage.

These elements, and many more, might be vital in evaluating the readiness to assume responsibility for AI-generated decisions in the workplace. This paper aims to identify and categorize the criteria that have a central effect on the willingness to assume responsibility for AI-facilitated decisions and AI-generated content in companies. These categories may later serve as a framework for management to consider when adopting a strategy for AI-based decision-making policies.

Keywords: AI, decision-making process, trust, responsibility

DOI: 10.54941/ahfe1005355
