Assessing the Transparency and Explainability of AI Algorithms in Planning and Scheduling Tools: A Review of the Literature

Open Access
Article
Conference Proceedings
Authors: Sofia Morandini, Federico Fraboni, Enzo Balatti, Aranka Hackmann, Hannah Brendel, Gabriele Puzzo, Lucia Volpi, Davide Giusino, Marco De Angelis, Luca Pietrantoni

Abstract: As AI technologies enter our working lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans at work. One critical requirement for such synergistic human-AI interaction is that the AI systems' behavior be explainable to the humans in the loop. The performance of AI decision-making has exceeded human capability in many specific domains, yet in the AI decision-making process, inherent black-box algorithms and opaque system information lead to highly accurate but incomprehensible results. The need for explainability of intelligent decision-making is therefore becoming urgent, and a transparent process can strengthen trust between humans and machines. The TUPLES project, a three-year Horizon Europe R&I project, aims to bridge this gap by developing AI-based planning and scheduling (P&S) tools using a comprehensive, human-centered approach. TUPLES leverages data-driven and knowledge-based symbolic AI methods to provide scalable, transparent, robust, and secure planning and scheduling solutions. It adopts a use-case-oriented methodology to ensure practical applicability.
Use cases are chosen based on input from industry experts, cutting-edge advances, and manageable risks (e.g., manufacturing, aviation, waste management). The EU guidelines for Trustworthy Artificial Intelligence highlight key requirements such as human agency and oversight, transparency, fairness, societal well-being, and accountability. The Assessment List for Trustworthy Artificial Intelligence (ALTAI) is a practical self-assessment tool for businesses and organizations to evaluate their AI systems. Existing AI-based P&S tools only partially meet these criteria, so innovative AI development approaches are necessary. We conducted a literature review to explore current research on the transparency and explainability of AI algorithms in P&S, aiming to identify metrics and recommendations. The findings highlighted the importance of Explainable AI (XAI) in AI design and implementation. XAI addresses the black-box problem by making AI systems explainable, meaningful, and accurate. It uses pre-modeling, in-modeling, and post-modeling explainability techniques, relying on psychological concepts of human explanation and interpretation for a human-centered approach. The review pinpoints specific XAI methods and offers evidence to guide the selection of XAI tools in planning and scheduling.

Keywords: Human-AI Interaction, Trustworthy AI, decision-making

DOI: 10.54941/ahfe1004068
