Optimizing AI System Security: An Ecosystem Recommendation to Socio-Technical Risk Management
Open Access Article, Conference Proceedings
Authors: Kitty Kioskli, Antonios Ramfos, Steve Taylor, Leandros Maglaras, Ricardo Lugo
Abstract: Given the sophistication of adversarial machine learning (ML) attacks on Artificial Intelligence (AI) systems, enhanced security frameworks that integrate human factors into risk assessments are critical. This paper presents a comprehensive methodology combining cybersecurity, cyberpsychology, and AI to address human-related aspects of these attacks. It introduces an AI system security optimization ecosystem to help security officers protect AI systems against various attacks, including poisoning, evasion, extraction, and inference. The risk management approach enhances NIST and ENISA frameworks by incorporating socio-technical aspects of adversarial ML threats. By creating digital clones and using explainable AI (XAI) techniques, the human elements of attackers are integrated into security risk management. An innovative conversational agent is proposed to include defenders’ perspectives, advancing the design and deployment of secure AI systems and guiding future certification schemes.
Keywords: AI System Security, Socio-Technical Risk Management, Explainable AI (XAI), Cybersecurity Frameworks
DOI: 10.54941/ahfe1005635