AI Audit as a tool for effective AI risk management
Open Access
Article
Conference Proceedings
Authors: Herwig Zeiner
Abstract: Artificial intelligence (AI), especially large language models (LLMs), is increasingly permeating all industries and promises transformative potential. Such large language models demonstrate impressive abilities in text generation, data analysis, and even creative tasks. However, this rapid proliferation and increase in performance go hand in hand with growing awareness of and concern about the manifold risks these technologies pose. The range of potential harms extends from operational malfunctions and data privacy breaches to profound systemic impacts on society and the economy. Given this duality of benefit and risk, there is an urgent need for robust governance, standardized risk management practices, and effective mitigation strategies. This paper examines AI certification, specific risks associated with LLMs, corresponding technical and organizational mitigation techniques, and the emerging concept of systemic AI risk. The standardization landscape for AI is still fragmented, but it shows clear signs of convergence. AI audits based on standards such as ISO 42001, combined with attention to LLM-specific risks, support this process of security and impact assessment.
Keywords: AI audit, artificial intelligence, transparency, traceability, fairness, risk management, trust, efficiency, effectiveness
DOI: 10.54941/ahfe1006101