Charting Trustworthiness: A Socio-Technical Perspective on AI and Human Factors
Open Access
Article
Conference Proceedings
Authors: Theofanis Fotis, Kitty Kioskli, Eleni Seralidou
Abstract: Integrating AI into critical decision-making environments, including cybersecurity, highlights the importance of understanding human factors in fostering trust and ensuring safe human-AI collaboration. Existing research emphasizes that personality traits, such as openness, trust propensity, and affinity for technology, significantly influence user interaction with AI systems, impacting trustworthiness and reliance behaviours. Furthermore, studies in cybersecurity underscore the socio-technical nature of threats, with human behaviour contributing to a significant portion of breaches. Building on these insights, this study presents the development and validation of a questionnaire designed to assess personality-driven factors in AI trustworthiness, advancing tools to mitigate human-centric risks in cybersecurity. Drawing on interdisciplinary foundations from cyberpsychology, human-computer interaction, and the behavioural sciences, the questionnaire evaluates dimensions including ethical responsibility, collaboration, technical competence, and adaptability. Subject-matter experts systematically reviewed the items to ensure face and content validity, reflecting theoretical and empirical insights from prior studies on human behaviour and cybersecurity resilience. The tool's scoring system employs weighted Likert-scale responses, enabling detailed evaluations of trust dynamics and identifying key areas for intervention. By bridging theoretical and applied perspectives, this research contributes to advancing the role of human factors in cybersecurity, offering actionable insights for the design of trustworthy AI systems and calibrated trust practices.
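The abstract's weighted Likert-scale scoring could be sketched as follows. This is a minimal illustration only: the item names, weights, and dimension groupings below are hypothetical and are not taken from the paper, which does not publish its scoring details here.

```python
# Hypothetical sketch of weighted Likert-scale scoring per dimension.
# Items, weights, and dimension groupings are illustrative assumptions,
# not the questionnaire's actual instrument.

def score_dimensions(responses, weights, dimensions):
    """Return a weighted mean score for each dimension.

    responses:  {item_id: Likert response, e.g. 1-5}
    weights:    {item_id: weight reflecting the item's importance}
    dimensions: {dimension_name: [item_ids belonging to it]}
    """
    scores = {}
    for dim, items in dimensions.items():
        weighted_total = sum(weights[i] * responses[i] for i in items)
        weight_sum = sum(weights[i] for i in items)
        scores[dim] = weighted_total / weight_sum  # weighted mean
    return scores

# Illustrative usage with two of the abstract's named dimensions
responses = {"q1": 4, "q2": 5, "q3": 2, "q4": 3}
weights = {"q1": 1.0, "q2": 2.0, "q3": 1.5, "q4": 1.0}
dimensions = {
    "ethical_responsibility": ["q1", "q2"],
    "adaptability": ["q3", "q4"],
}
print(score_dimensions(responses, weights, dimensions))
```

A weighted mean (rather than a raw sum) keeps dimension scores on the original Likert range, which makes low-scoring dimensions directly comparable as candidates for intervention.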
Keywords: Artificial Intelligence, human factors, cybersecurity, trustworthiness, co-creation
DOI: 10.54941/ahfe1006137