Towards a Human-Centric AI Trustworthiness Risk Management Framework

Open Access
Article
Conference Proceedings
Authors: Kitty Kioskli, Laura Bishop, Nineta Polemi, Antonis Ramfos

Abstract: Artificial Intelligence (AI) aims to replicate human behavior in socio-technical systems, with a strong focus on AI engineering to replace human decision-making. However, an overemphasis on AI system autonomy can lead to biased, unfair, and unethical decisions, and thus to a lack of trust, resulting in decreased performance, motivation, and competitiveness. To mitigate these AI threats, developers are incorporating ethical considerations, often with input from ethicists, and using technical tools such as IBM's AI Fairness 360 and Google's What-If Tool to assess and improve fairness in AI systems. These efforts aim to create more trustworthy and equitable AI technologies. Building trustworthiness into AI technology, however, does not necessarily mean that human users will trust it. For humans to use a technology, trust must be present, which is challenging when AI lacks a permanent, stable physical embodiment. It is equally important that humans do not over-trust AI, which results in its misuse. Trustworthiness should therefore be assessed in relation to human acceptance, performance, satisfaction, and empowerment, so that design choices grant users ultimate control over AI systems, and in relation to the extent to which the technology meets the business context of the socio-technical system in which it is used. For AI to be perceived as trustworthy, it must also align with the legal, moral, and ethical principles and behavioral patterns of its human users, while also considering the organizational responsibility and liability associated with the socio-technical system's business objectives.
A commitment to incorporating these principles to create secure and effective decision-support AI systems will offer a competitive advantage to the organizations that integrate them. Based on this need, the proposed framework is a synthesis of research from diverse disciplines (cybersecurity, social and behavioral sciences, ethics) designed to ensure the trustworthiness of AI-driven hybrid decision support while accommodating the specific decision-support needs and trust of human users. It also aims to align with the key performance indicators of the socio-technical environment in which it operates. The framework serves to empower AI system developers, business leaders offering AI-based services, and AI system users, such as educators, professionals, and policymakers, in achieving a more complete form of human-AI trustworthiness. It can also be used by security defenders to make fair decisions during AI incident handling. Our framework extends the NIST AI Risk Management Framework (AI RMF): at every stage of the dynamic trustworthiness risk management cycle (threat assessment, impact assessment, risk assessment, risk mitigation), human users are considered (e.g., their morals, ethics, behavior, and IT maturity), as are the primary business objectives of the AI socio-technical system under assessment. Co-creation and human-experiment processes accompany all stages of system management and are therefore part of the proposed framework. This interaction enables continuous trustworthiness improvement: during each cycle of risk mitigation, a human user assessment takes place, identifying corrective actions and additional mitigation activities to implement before the next improvement cycle. Thus, the main objective of this framework is to help build 'trustworthy' AI systems that are ultimately trusted by their users.
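The improvement loop described above (threat, impact, and risk assessment, followed by mitigation and a human user assessment before the next cycle) can be sketched in code. This is a minimal, hypothetical illustration, not an implementation from the paper: the names (`TrustContext`, `human_user_assessment`), the risk formula (likelihood × impact), and the acceptance threshold are all assumptions made for the sake of the example.

```python
from dataclasses import dataclass, field

@dataclass
class TrustContext:
    """Hypothetical state of one AI socio-technical system under assessment."""
    threats: dict            # threat name -> likelihood in [0, 1]
    impacts: dict            # threat name -> impact severity in [0, 1]
    mitigation_strength: float = 0.5  # assumed fraction of likelihood removed per cycle

def assess_risk(ctx: TrustContext) -> dict:
    # Risk assessment step: a simple likelihood x impact model per threat.
    return {t: ctx.threats[t] * ctx.impacts.get(t, 0.0) for t in ctx.threats}

def human_user_assessment(risks: dict, acceptance_threshold: float) -> list:
    # Placeholder for the co-creation / human-experiment step: users flag
    # threats whose residual risk they still find unacceptable.
    return [t for t, r in risks.items() if r > acceptance_threshold]

def mitigate(ctx: TrustContext, flagged: list) -> None:
    # Risk mitigation step: reduce the likelihood of each flagged threat.
    for t in flagged:
        ctx.threats[t] *= (1.0 - ctx.mitigation_strength)

def improvement_cycle(ctx: TrustContext, max_cycles: int = 10,
                      acceptance_threshold: float = 0.1):
    """Run the dynamic cycle until users accept the residual risk."""
    for cycle in range(1, max_cycles + 1):
        risks = assess_risk(ctx)
        flagged = human_user_assessment(risks, acceptance_threshold)
        if not flagged:
            return cycle, risks  # users accept all residual risks
        mitigate(ctx, flagged)   # corrective actions before the next cycle
    return max_cycles, assess_risk(ctx)

ctx = TrustContext(threats={"bias": 0.8}, impacts={"bias": 0.9})
cycles, residual = improvement_cycle(ctx)
```

The point of the sketch is the control flow: the human user assessment sits inside the loop, so mitigation is driven by user acceptance rather than by a purely technical risk score.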

Keywords: human factors, trustworthiness, Artificial Intelligence, Risk Management

DOI: 10.54941/ahfe1004766
